Month: March 2014

Comparison of decimal values with NBi

When you’re developing tests with NBi, it’s not uncommon to run into issues when comparing decimal values, and this is especially true if your tests involve an MDX query. A usual pitfall is that NBi, comparing two rows, reports that the value “10.3” is different from “10.30” or even “10,30”. The following test simulates this issue:

<testSuite name="Compare decimal values" xmlns="http://NBi/TestSuite">
	<settings>
		<default apply-to="everywhere">
			<connectionString>Data Source=mhknbn2kdz.database.windows.net;Initial Catalog=AdventureWorks2012;User Id=sqlfamily;password=sqlf@m1ly</connectionString>
		</default>
	</settings>
	<test name="Keys on decimal value" uid="0001">
		<system-under-test>
			<execution>
				<query>
					select cast(10.3 as decimal(3,1))
				</query>
			</execution>
		</system-under-test>
		<assert>
			<equalTo>
				<resultSet>
					<row>
						<cell>10.30</cell>
					</row>
				</resultSet>
			</equalTo>
		</assert>
	</test>
</testSuite>

[Screenshot: multiple representations of a numeric value]
After the execution of this test, you’ll receive a failure report similar to the screenshot at your right. If you analyze this report, you’ll understand that the representation of the numerical value “10.3” has not been handled the same way in the system-under-test and in the assertion. In the system-under-test, NBi learns from SQL Server that the returned value is a decimal (so a numeric) and handles it as such. It means that when displaying this value, NBi uses your regional settings and, in my case, a comma as decimal separator: the display is “10,3”, as highlighted in blue on the screenshot. On the other hand, for the assertion, NBi assumes that the value “10.30” is a textual value. Its representation in the report is therefore not modified and “10.30” is displayed (highlighted in red). When NBi compares the content of two cells, it always uses the type specified in the assertion. So in this case, NBi effectively compares two textual values, “10,3” and “10.30”. Following this logic, the two rows are indeed different.

[Screenshot: type displayed in the table header]
The good news is that you can easily tune this behavior. In fact, if you’re vigilant, you’ve probably noticed that NBi warns you about the comparison as textual values in the failure report: in the header of the tables, NBi states the type considered for the comparison. In this sample, NBi explains that it’s currently using the type “Text” (circled in red).

To change this, you’ll need to override the default configuration of NBi. By default, NBi considers that all the columns of a result-set are textual values except the last one, which is numeric. But in the case of a result-set with only one column, NBi considers this column as textual. To change this, you need to express a column definition in your assertion. This definition is the column element highlighted in the code below.

<assert>
    <equalTo>
        <column index="0" role="value" type="numeric"/>
        <resultSet>
            <row>
                <cell>10.30</cell>
            </row>
        </resultSet>
    </equalTo>
</assert>

By putting this configuration of the result-set in place, we’ve defined that the column is now “numeric” and that its role is “value”, no longer “key”. NBi now compares two numeric values, “10.3” and “10.30”, which are identical, and returns a success.
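When a result-set has several columns, key and value definitions can be combined. A minimal sketch along the same lines (the elements and attributes match the sample above; the index values and cell contents are illustrative and should be adapted to your own result-set):

```xml
<assert>
    <equalTo>
        <!-- First column: the key used to match rows between result-sets -->
        <column index="0" role="key" type="text"/>
        <!-- Second column: the numeric value effectively compared -->
        <column index="1" role="value" type="numeric"/>
        <resultSet>
            <row>
                <cell>Product A</cell>
                <cell>10.30</cell>
            </row>
        </resultSet>
    </equalTo>
</assert>
```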
[Screenshot: green success message]


How to compare the result-sets returned by an SQL query and an MDX query with NBi

If you’re designing an SSAS cube and putting test automation into action, you’ll probably want to compare the results of two queries: the first one on a source system or on your SQL data warehouse, and the second one on your cube.

It’s a common and simple task for NBi. This framework is designed to ensure that this kind of test doesn’t quickly become too complex; the implementation of this test shouldn’t take more than a few seconds. As for other tests designed with this framework, you’ll need to define two objects: a system-under-test (the result-set of the query on the cube) and an assertion (it’s equal to the result-set of the query on the data warehouse). The basic structure of the test will be:

<test name="Compare Cube (MDX) with datawarehouse (SQL)">
    <system-under-test>
        <execution>
            <query connectionString="@AsAdventureWorks2012">
                <!-- Here my MDX Query -->
            </query>
        </execution>
    </system-under-test>
    <assert>
        <equalTo>
            <query connectionString="@SqlAdventureWorks2012">
                <!-- Here my SQL Query -->
            </query>
        </equalTo>
    </assert>
</test>

The two queries can be directly embedded into the test file or written in an external file. Both options have their advantages. If you place your query into the test, it’s helpful for test editing to have only one file to open. Conversely, a query written in an external file can be opened directly in SQL Server Management Studio, and this file can also be referenced more than once by your test-suite. The choice is not obvious and depends on what exactly you’ll do with this query, but a general guideline could be to use an embedded query if the query is not too large and is only used once in your test-suite.
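For the external-file option, NBi lets the query element reference a file instead of embedding the text. A minimal sketch, assuming a file named CountOfCustomer.sql sits next to the test-suite (the file attribute and its exact resolution rules should be checked against the documentation of your NBi version):

```xml
<assert>
    <equalTo>
        <!-- The SQL text lives in an external file that SSMS can open directly -->
        <query connectionString="@SqlAdventureWorks2012" file="CountOfCustomer.sql"/>
    </equalTo>
</assert>
```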

To illustrate our sample, we’ll check that the “count of customers” is indeed correctly implemented in the SSAS cube of Adventure Works. For this, we’ll use the “Date” dimension as a dicer. The corresponding MDX query is:

select
    [Measures].[Internet Order Count] on 0,
    non empty([Date].[Calendar Year].children) on 1
from
    [Adventure Works]

[Screenshot: result of the MDX query]
Execution of this MDX query returns the preceding result-set: four rows of two cells. The first cell of a row is the year and the second the count of customers.

In the corresponding SQL data warehouse, we’ll need to write the following query.

select
	'CY ' + cast(year([OrderDate]) as varchar(50)) as CivilYear
	, count([CustomerID]) as CountOfCustomer
from
	[Sales].[SalesOrderHeader]
where
	[OnlineOrderFlag]=1
group by
	'CY ' + cast(year([OrderDate]) as varchar(50))

[Screenshot: result of the SQL query]
The corresponding result-set is displayed at the left. Note that for this result-set, there is no guarantee of the ordering of the rows; in this case, 2006 is at the top and 2007 at the end. This is not a problem for NBi: the framework doesn’t take the ordering of the rows into account when comparing two result-sets.

If we put the two queries and the test definition together, we have the following test:

<test name="">
    <system-under-test>
         <execution>
             <query connectionString="@AsAdventureWorks2012">
                 select
					[Measures].[Internet Order Count] on 0,
					non empty([Date].[Calendar Year].children) on 1
				 from
					[Adventure Works]
             </query>
         </execution>
    </system-under-test>
     <assert>
         <equalTo>
             <query connectionString="@SqlAdventureWorks2012">
                 select
					'CY ' + cast(year([OrderDate]) as varchar(50)) as CivilYear
					, count([CustomerID]) as CountOfCustomer
				 from 
				 	[Sales].[SalesOrderHeader]
				 where
				 	[OnlineOrderFlag]=1
				 group by 
					'CY ' + cast(year([OrderDate]) as varchar(50))
             </query>
         </equalTo>
     </assert>
</test>

[Screenshot: positive (green) result]
The execution of this test in the test runner returns a positive result (green). As said before, the fact that the rows are not ordered the same way in the two result-sets is not an issue for NBi. The framework doesn’t rely on row ordering when executing the comparison but on key matching. By default, NBi considers the first cells of a row as part of the key. In our sample, NBi looks for the row “CY 2005” (first row of the MDX query) in the second result-set. When this row is found, NBi compares its value (the content of the last cell of the row, 1013) to the expected value (1013). NBi continues like this for each row. In this case, NBi concludes that the result-sets are equal and validates the test.
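If the default convention (leading cells form the key, the last cell is the value) doesn’t fit your result-sets, you can state the roles explicitly with the same column definitions used earlier for the decimal comparison. A sketch for this sample’s assertion:

```xml
<assert>
    <equalTo>
        <!-- Match rows on the year label, compare on the count -->
        <column index="0" role="key" type="text"/>
        <column index="1" role="value" type="numeric"/>
        <query connectionString="@SqlAdventureWorks2012">
            select
                'CY ' + cast(year([OrderDate]) as varchar(50)) as CivilYear
                , count([CustomerID]) as CountOfCustomer
            from [Sales].[SalesOrderHeader]
            where [OnlineOrderFlag]=1
            group by 'CY ' + cast(year([OrderDate]) as varchar(50))
        </query>
    </equalTo>
</assert>
```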

If we slightly change our SQL to a “distinct count” in place of a “count”,

select
	'CY ' + cast(year([OrderDate]) as varchar(50)) as CivilYear
	, count(distinct [CustomerID]) as CountOfCustomer
from
	[Sales].[SalesOrderHeader]
where
	[OnlineOrderFlag]=1
group by
	'CY ' + cast(year([OrderDate]) as varchar(50))

[Screenshot: reloading the tests in NUnit]
This change will impact the result-set and the test should no longer be green. To verify this, run the test after adapting the query in your nbits file. Don’t forget to reload the tests (CTRL+R) in NUnit to ensure the latest version of your test-suite is taken into account.
[Screenshot: negative (red) result]
Then execute the test and you’ll see a red bar, signaling failure.

[Screenshot: delta between the result-sets]
The exception reported is “Execution of the query doesn’t match the expected result”. If you copy-paste the full message displayed in the textbox, you’ll get additional valuable information about the difference between the two result-sets. At the top of the message, NBi displays the first rows (10 at most) of the two result-sets (sections highlighted in yellow). Just under, NBi displays the rows that are missing, unexpected or with a delta in their values (highlighted in green). In the sample, two rows are considered as non-matching because their values in the two result-sets diverge.

Now, if we go back to our initial SQL query (so without the distinct) but arbitrarily introduce an additional condition in the where clause, we can test the behavior of NBi when the two result-sets don’t match in terms of row count.

select
	'CY ' + cast(year([OrderDate]) as varchar(50)) as CivilYear
	, count([CustomerID]) as CountOfCustomer
from
	[Sales].[SalesOrderHeader]
where
	[OnlineOrderFlag]=1
        and year([OrderDate])<2008
group by
	'CY ' + cast(year([OrderDate]) as varchar(50))

If we just add this line to our query in the nbits file, we’ll have trouble trying to load the test in NUnit. Indeed, the symbol “&lt;” is ambiguous in an xml file, so we must mark it as character data rather than markup. This is quickly done by introducing a CDATA section in the xml. The assertion should look like this:

<assert>
    <equalTo>
        <query connectionString="@SqlAdventureWorks2012">
            <![CDATA[
            select
                'CY ' + cast(year([OrderDate]) as varchar(50)) as CivilYear
                , count(distinct [CustomerID]) as CountOfCustomer
            from 
                [Sales].[SalesOrderHeader]
            where
                [OnlineOrderFlag]=1
                and year([OrderDate])<2008
            group by 
                'CY ' + cast(year([OrderDate]) as varchar(50))
            ]]>
        </query>
    </equalTo>
</assert>

[Screenshot: unexpected rows]
As expected, if the adapted test is run, NBi displays a red light and helps you find the difference between the result-sets by identifying the unexpected row in the system-under-test (the section highlighted in red).

In the next posts, we’ll see how to tune NBi to detect missing and unexpected rows in more complex cases, but also how to introduce a tolerance when comparing row values.
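As a preview of the tolerance topic, the column definition is also the place where a tolerance is expressed. A minimal sketch, assuming the tolerance attribute of your NBi version accepts an absolute value (check the documentation for the exact forms supported):

```xml
<assert>
    <equalTo>
        <column index="0" role="key" type="text"/>
        <!-- Values are considered equal when they differ by at most 1 -->
        <column index="1" role="value" type="numeric" tolerance="1"/>
        <query connectionString="@SqlAdventureWorks2012">
            select 'CY ' + cast(year([OrderDate]) as varchar(50))
                , count([CustomerID])
            from [Sales].[SalesOrderHeader]
            where [OnlineOrderFlag]=1
            group by 'CY ' + cast(year([OrderDate]) as varchar(50))
        </query>
    </equalTo>
</assert>
```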

Test automation for a SSAS cube

I’ve never really understood why data-oriented people are so slow and unwilling to put into action the best practices of traditional IT. I mean, in 2014, it’s not rare to find database code not protected by source control, or an ETL developer fixing a bug directly in the production environment. It’s just the same for “automated tests”, a standard in most .Net/Java/Python developments but not so common as soon as you go to the data side (SQL, OLAP, ETL, …).

Test automation is the use of dedicated software to control the execution of tests and the comparison of actual outcomes with expected outcomes. All the steps involved in this process of test execution are handled by software: it prepares the environment, executes the tests and also decides whether these tests pass or not.

The big advantage of test automation is repeatability. It doesn’t matter how many times you need to execute the tests; it will be done by software (usually lowering the cost of testing sessions). But you have a few other, less obvious advantages, such as the lack of distraction and human mistakes. To illustrate this, think about the last time you compared two columns of 25 figures. If one figure differs from its counterpart, are you sure you’ll spot it? Even if you’ve already made this comparison 10 times with success? Software doesn’t suffer from distraction or lassitude.

[Image: no automation]

A lot of people claim that test automation is time-expensive. It’s true that the implementation of tests will surely cost more with an automation strategy than with any other strategy. But you usually implement your tests once and execute them several times. The cost involved in the execution of an automated test is zero (or close to it). The price of an automated test-suite becomes affordable if you plan to execute your tests more than a few times. Testing often is a prerequisite for quality software: test as often as possible and it’ll save you time. Time is also generally a concern for people who avoid automating their test-suite; they are in a rush and want to save time. For them, I have only one answer: you don’t have time because you’ve not automated your test-suite.

Testing a cube with or without automation is not different. You’ll need to carry out the same tests: cube structure, dimensions’ members and finally measures. Other points, such as security, performance and load, should also be tested but are out of scope for this post.

When you test the structure of a cube, you should check that all expected dimensions, hierarchies, measures and so on are there. It’s also important that you have no unexpected artifacts. You should also ensure that the relations between dimensions and facts are effectively available for end-users.
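In NBi, such checks are expressed as structure tests. The sketch below is from memory of the NBi syntax; the structure element, the exists assertion and attribute names such as perspective and measure-group should be double-checked against the NBi documentation:

```xml
<test name="Measure 'Internet Order Count' exists in the cube">
    <system-under-test>
        <structure>
            <!-- The cube artifact whose existence we assert -->
            <measure caption="Internet Order Count"
                     perspective="Adventure Works"
                     measure-group="Internet Orders"/>
        </structure>
    </system-under-test>
    <assert>
        <exists/>
    </assert>
</test>
```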

Regarding dimensions’ members, you’ll be interested in asserting their availability and validating their ordering (alphabetical, numerical, chronological or specific). To check the availability of members, you’ll usually compare the content of your cube with your source system or with your data warehouse. But it’s not necessarily the smartest idea, because you’d have to re-implement in your test all the rules written in your ETL or in the security layer that filter the members visible to the end-user. It could be tricky and error-prone. It’s sometimes smarter to ask your stakeholders which members they are sure they want to show and how many they are expecting. Based on this information, it’s possible to ensure that the members given by your stakeholders are effectively visible in your cube and that the actual count of members is indeed close to the figure given by your stakeholders.
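NBi covers this with members tests. Again a sketch from memory; element and attribute names such as members, hierarchy and contain should be verified against the NBi documentation:

```xml
<test name="Year 2007 is visible to end-users">
    <system-under-test>
        <members>
            <hierarchy caption="Calendar Year"
                       dimension="Date"
                       perspective="Adventure Works"/>
        </members>
    </system-under-test>
    <assert>
        <!-- A stakeholder-provided member must be among the hierarchy's members -->
        <contain caption="CY 2007"/>
    </assert>
</test>
```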

Regarding measures, you need to check that aggregations are correct (not a sum in place of an average), that they are correctly linked to the expected members of each dimension and that all calculations are correctly based on other available facts. There is really a huge diversity of tests to think about. For your reference, again, the source application or the data warehouse are interesting, but don’t forget to also check likelihood with your stakeholders. They probably have good ideas about ranges for all the figures they are requesting; you could also find existing reports on these facts, and such reports are another good source of expectations, provided you introduce a tolerance.

Now that you have an idea about what to test, you could ask… “how do I effectively implement this automation?” For test automation, you’ll need a framework. The not-so-smart option would be to try to implement one by yourself: these frameworks are complex and require good testing. So it’s probably better and safer to use one of the frameworks available on the net. The surprise could be that there isn’t a lot of choice. Indeed, as explained before, this practice (test automation) is unfortunately not so common in the data world.

For the next blog posts, I’ll use NBi to show you some samples. This framework is open-source and free (as in beer and as in speech), actively maintained, documented, and supports (without tricks or hacks) all the tests described above. Moreover, you don’t need to write code in Java, Python or C#; some xml tags are enough to implement your tests. You don’t need a compiler either. I’d recommend that you download and install it (check its own documentation for this).

In the next posts, I’ll cover some use-cases, such as comparing a query’s result-set to static values or to source systems, to give you some guidance about test automation for an SSAS cube with NBi.

First step … a map and a direction

Welcome on board!

As usual, I want to touch on a lot of topics and, if possible, all of them simultaneously. But, to be short, this blog should cover the world of “information” from its roots (data and processes) to its derivatives (knowledge and wisdom): how to capture the needs, how to design this ideal world, how to build it, how to assert its quality and improve it, and finally how to support it.
[Image: compass and map]

Evolution is the keyword. We, as humans, need to evolve or we’ll die, but we can’t forget the past or it’ll just be a repetition of the same mistakes, an eternal first step. It’s the same for the information world we’re building. Today, we must prepare the steps of tomorrow and support the foundations built yesterday. I’ll share with you my vision of how we could do this.

Be prepared to dive into a world of Domain Models, Data Governance, Business Intelligence and Big Data, where Requirements, Design, Implementation and Quality Assessments are achievable artifacts.