NBi

Presenting before the summer ’17

I’ll be really busy presenting before this summer. With four sessions (three of them completely new) to deliver in less than 30 days, I’ll have less time for my open-source projects.

The first session will be in Charleroi (Hainaut, Belgium) on the 2nd of June for the Journée Agile 2017. This one is a bit special for me: I’ve never spoken so close to my hometown and it will be my first time speaking in Wallonia in 10 years. No one is a prophet in his own country, even if it is a small one! It will also be different because the main topic of the conference is not SQL Server or databases but agility, a concept I’ve been using for 10 years, so I’m really not a newcomer. Agility and BI are two concepts with little traction between them, so let’s break down the walls! I’ll speak (in French) about test automation for BI solutions. It’s the session that I’m presenting across Europe, but I’ll adapt it a bit to the audience, stressing the agility topic (and spending a bit less time on the demos). If you’re interested in the agile topic (and if you’re not, you should really care about this … the world has changed, hurry up!), a few seats are still available.

The next stop will be at SQL Saturday Rheinland (Sankt Augustin, North Rhine-Westphalia, Germany – the university between Bonn and Köln). The session will be about the M language (the language used by Power Query/Power BI to get and transform data) and how much fun it is to code with a functional language. It will be the first time that I present this session as-is, but I’ve already included large parts of it in blog posts or show-cases, so not a lot of stress to be ready on time. About M, the main topic of this session: I’m really impressed by the power of this language and I’ll try to share a bit of my knowledge and help you take your first steps in the right direction. Don’t be confused, it’s not a session about Power Query/Get & Transform and I won’t use the UI … I’ll show you the code and how to solve more advanced cases that you won’t be able to implement with just the user interface.

Then, at the end of the month, I’ll be presenting at the 24 Hours of PASS (French edition – 2017). This year, I accepted to present two sessions (webcasts in French). Can you really say “no” twice to Isabelle and Jean-Pierre? The first session is about going high in normal forms for data warehouses. Don’t panic, I won’t bore you with the theory of normal forms. I just want to explain why Data Vault, Anchor Modeling (and Tibre Modeling, my own “methodology” to model time-based relations in a data warehouse) are really powerful approaches for some specific cases. I’ll also show you why SQL Server is well suited to host and serve this kind of model. I’m really excited about this session, it will be a worldwide première and I hope to have the opportunity to present it again later, fingers crossed.

The second session at the 24 HOP French Edition is about graph processing with SQL Server 2017 (aka SQL Graph). I’ve been following Neo4j for several years now and I appreciate how simple it is to solve some use-cases with Cypher and this engine. I still need to develop large parts of my session, but the plan is surely not to make an in-depth comparison of the two engines. I expect to explain why graph processing is better suited for some specific cases and to show the difference in terms of expressiveness and performance compared with a standard SQL approach.

How to connect to PostgreSQL with NBi?

I often receive questions such as “Does NBi work with PostgreSQL?” or “Is it possible to use another database than SQL Server?”. The answer is definitely: YES! NBi works with any database supporting OleDB or ODBC connections.

Yesterday, I published a blog post about the connection to MySQL; today we’ll see how to connect to PostgreSQL from NBi by using both the ODBC and the OleDB connectors. Unlike MySQL, PostgreSQL has at least one ODBC driver and at least one OleDB provider that are still maintained (and even actively developed).

I’ll start with the ODBC driver for PostgreSQL. You can find the binaries on the official PostgreSQL website. Download and install this driver. You can check that the ODBC driver has been correctly installed in the “Programs and Features” panel:

Another great way to check that the ODBC drivers are installed is to use PowerShell and the cmdlet

Get-OdbcDriver

This method returns the following list where you should find something related to PostgreSQL.

This method will also give you the exact name of the driver, something where my google-fu usually miserably fails. In this case I have two drivers, named PostgreSQL ANSI(x64) and PostgreSQL Unicode(x64). Be sure to use these exact names in your connection strings!

When these connectors are successfully installed, edit your test-suite and define the connection string to your PostgreSQL server as:

<query
   connectionString=
      "Driver={PostgreSQL ANSI(x64)};
       Server=127.17.0.2;
       Database=postgres;
       UId=CI-Build;
       Pwd=Xyz;" 
>
  <![CDATA[
     select 'Hello world'
  ]]>
</query>

That’s it!

For the OleDB provider, I’ll use the commercial version provided by Intellisoft, available on this page. This product comes with a free trial. Once it’s downloaded and installed, you can check that it has been correctly registered by running the following PowerShell cmdlet:

(New-Object system.data.oledb.oledbenumerator).GetElements()

and it should contain the following result:

Unfortunately, this method doesn’t return the nickname of the provider that we must use in our connection string. In this specific case, this name is PGNP.1.

Because this OleDB provider is not pre-registered by NBi, we need to map it to the OleDb namespace as explained in the documentation of NBi. The config file must be updated with:

<nbi testSuite="PostgreSQL-TestSuite.nbits">
  <providers>
    <add id="PGNP.1" invariant-name="System.Data.OleDb"/>
  </providers>
</nbi>

Once it’s done, you can safely edit the connection string in your test-suite:

<query
  connectionString=
    "Provider=PGNP.1;
     Data Source=127.17.0.2;
     Initial Catalog=postgres;
     User ID=CI-Build;
     Password=Xyz;"
>
  <![CDATA[
     select 'Hello World'
  ]]>
</query>

We could move this connection string to the settings (references or defaults) or to the config file.
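
For instance, a minimal sketch of such a defaults block, reusing the connection string above (the same settings syntax applies to any other type of connection):

<settings>
  <default apply-to="system-under-test">
    <connectionString>
      Provider=PGNP.1;
      Data Source=127.17.0.2;
      Initial Catalog=postgres;
      User ID=CI-Build;
      Password=Xyz;
    </connectionString>
  </default>
</settings>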

You can now configure a connection to a PostgreSQL instance without any problem!

How to connect to MySQL with NBi?

I often receive questions such as “Does NBi work with MySQL/PostgreSQL?” or “Is it possible to use another database than SQL Server?”. The answer is definitely: YES! NBi works with any database supporting OleDB or ODBC connections.

Today, I’ll specifically answer the question about MySQL; tomorrow I’ll publish another blog post about PostgreSQL. For MySQL, I’ll connect from NBi by using an ODBC connector. Indeed, MySQL has no support for an OleDB connector. To be 100% transparent, some old OleDB connectors exist, but there doesn’t seem to be any active development on these projects.

Start by downloading the latest version of the ODBC driver for MySQL from the MySQL website, then install it. You can check that the ODBC driver has been correctly installed in the “Programs and Features” panel:

Another great way to check that the ODBC drivers are installed is to use PowerShell and the cmdlet

Get-OdbcDriver

This method returns the following list where you should find something related to MySQL.

Do you need the 32-bit (x86) or the 64-bit (x64) driver? It depends on the application using the ODBC driver … both need to match. When using the nunit-console or the NUnit GUI, you’ll need the 64-bit version. But if you’re running your tests from Visual Studio, then you’ll need the 32-bit version. You can install both on the same computer, but it’s apparently recommended to install the 64-bit version first and then the 32-bit one. I haven’t checked this personally.

When the connector is successfully installed, edit your test-suite and define the connection string to your MySQL server as:

<query
   connectionString=
      "Driver={MySQL ODBC 5.3 UNICODE Driver};
       Server=127.17.0.3;
       Database=adventureworks;
       User=CI-Build;
       Password=Xyz;" 
>
  <![CDATA[
     select count(*) from adventureworks.address
  ]]>
</query>

We could also use this connection string in our settings (defaults or references) or in our config file.
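
As a sketch, moving it to the defaults of the settings element could look like this (same connection string as above; a named reference would be defined in a similar way):

<settings>
  <default apply-to="system-under-test">
    <connectionString>
      Driver={MySQL ODBC 5.3 UNICODE Driver};
      Server=127.17.0.3;
      Database=adventureworks;
      User=CI-Build;
      Password=Xyz;
    </connectionString>
  </default>
</settings>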

Now, you can use your MySQL instance exactly the same way you’d use a SQL Server instance.

Regression test-suite for BI solution – Part 2

In the previous blog post of this series, I explained how to query a DMV and how to slightly modify the set of test-cases generated by the query. In this post, we’ll see how to join two sets of test-cases to obtain a unique set.

Now that you’ve loaded the content of the first DMV, we could apply the same recipe to the second DMV. Well, it’s not so easy: if we just do this, the second call to case load will overwrite the result of the first query. To keep two sets of test-cases in memory, in genbiL, you’ll have to give them a name. This is done through the keyword scope (more info). This action must be executed each time you want to work on a different set of test-cases.

case scope 'dimensions';
case load query
{
 ...
}
on '...';

case scope 'hierarchies';
case load query
{
 ...
}
on '...';

To specify on which set you want to apply actions (filter, load, hold, …), you first execute the action scope with the name of that set. It’s possible to switch the scope at any moment and you can go back to a scope previously loaded.

To join the two sets, you’ll need to use the command cross (more info) with a join. But, before crossing the two existing datasets, you need to switch to a third, empty scope that will contain the result of the cross.

case scope 'dimensions and hierarchies';
case cross 'dimensions' with 'hierarchies' on 'dimension_unique_name';

Now that you’ve understood how to cross (join) two datasets, you can safely load some other DMVs and combine them to build a global picture of your cube. First step: load measures and measuregroups, then combine them. Second step: use the DMV returning the intersections between dimensions and measuregroups. If you combine this with the two already combined datasets, you’ll have the interesting interactions between hierarchies and measures. That’s probably the best starting point to automatically generate queries to run against your new cube and your old cube and compare the result-sets. Let’s do this:

Create a new template to support query-to-query comparisons (save it as regression\my-template.nbitt):

<test name="Compare '$Measure_Caption$' by '$hierarchy_caption$'">
  <description>Assert that the same query, 
  executed on two instances of the cube, 
  returns the same result-set. Specifically, 
  for measure '$measure_caption$' on hierarchy 
  '$hierarchy_caption$'</description>
  <edition author="$username$" created="$now$"/>
  <category>Measuregroup '$measuregroup_name$'</category>
  <trait name="Template">my-template.nbitt</trait>
  <system-under-test>
    <execution>
      <query>
        <![CDATA[
        select 
          $measure_unique_name$ on 0
          , non empty($hierarchy_unique_name$.children) on 1 
        from 
          [$cube_name$] 				
        ]]>
      </query>
    </execution>
  </system-under-test>
  <assert>
    <equalTo keys="all-except-last" tolerance="$tolerance$">
      <query>
        <![CDATA[
        select 
          $measure_unique_name$ on 0
          , non empty($hierarchy_unique_name$.children) on 1 
        from 
          [$cube_name$]
        ]]>
      </query>
    </equalTo>
  </assert>
</test>

Add the following genbiL code to load all the DMVs:

case scope 'dimensions';
case load query
{
	select
		[Dimension_Unique_Name]
		, [Dimension_Caption]
	from
		[$system].[MDSchema_dimensions]
	where
		[Cube_Name]='your-cube-name'
		and [Catalog_Name]='your-initial-catalog'
		and Dimension_Is_Visible
}
on 'Provider=MSOLAP.4;Data Source=your-data-source;Initial Catalog=your-initial-catalog';

case scope 'hierarchies';
case load query
{
	select
		[Dimension_Unique_Name]
		, [Hierarchy_Unique_Name]
		, [Hierarchy_Caption]
	from
		[$system].[MDSchema_hierarchies]
	where
		[Cube_Name]='your-cube-name'
		and [Catalog_Name]='your-initial-catalog'
		and Hierarchy_Is_Visible
}
on 'Provider=MSOLAP.4;Data Source=your-data-source;Initial Catalog=your-initial-catalog';

case scope 'levels';
case load query
{
	select
		[Hierarchy_Unique_Name]
		, [Level_Unique_Name]
		, [Level_Caption]
	from
		[$system].[MDSchema_levels]
	where
		[Cube_Name]='your-cube-name'
		and [Catalog_Name]='your-initial-catalog'
		and Level_Is_Visible
		and [Level_Number]<>'0'
}
on 'Provider=MSOLAP.4;Data Source=your-data-source;Initial Catalog=your-initial-catalog';

case scope 'measureGroups';
case load query
{
	select
		[MeasureGroup_Name]
	from
		[$system].[MDSchema_measureGroups]
	where
		[Cube_Name]='your-cube-name'
		and [Catalog_Name]='your-initial-catalog'
}
on 'Provider=MSOLAP.4;Data Source=your-data-source;Initial Catalog=your-initial-catalog';
case filter distinct;

case scope 'measures';
case load query
{
	select
		[Cube_Name]
		, [Measure_Unique_Name]
		, [Measure_Caption]
		, [MeasureGroup_Name]
		, [Measure_Display_Folder]
	from
		[$system].[MDSchema_measures]
	where
		[Cube_Name]='your-cube-name'
		and [Catalog_Name]='your-initial-catalog'
		and Measure_Is_Visible
}
on 'Provider=MSOLAP.4;Data Source=your-data-source;Initial Catalog=your-initial-catalog';

case scope 'links';
case load query
{
	select
		[MeasureGroup_Name]
		, [Dimension_Unique_Name]
	from
		[$system].[MDSCHEMA_MeasureGroup_Dimensions]
	where
		[Cube_Name]='your-cube-name'
		and [Catalog_Name]='your-initial-catalog'
		and Dimension_Is_Visible
}
on 'Provider=MSOLAP.4;Data Source=your-data-source;Initial Catalog=your-initial-catalog';

Once we have the separate DMVs, we can cross them together to obtain our model:

case scope 'dimensions-measures';
case cross 'measures' with 'links' on 'MeasureGroup_Name';
case cross 'dimensions-measures' with 'dimensions' on 'Dimension_Unique_Name';
case cross 'dimensions-measures' with 'hierarchies' on 'Dimension_Unique_Name';

Add a tolerance field with a value of 0.001:

case add column 'tolerance' values '0.001';

Then, generate the tests based on the test-cases loaded in the newly created scope and the template built above:

template load file 'Regression\my-template.nbitt';
suite generate;

And that’s it.

Now it’s up to you to build other queries and filter out the irrelevant test-cases.

SQL Saturday in Vienna and MS Cloud Summit in Paris to start in 2017

I’m sitting in my couch, watching a movie with my daughters, reminiscing about holidays, and getting ready for the first speaking engagements of 2017.

I’m completely thrilled to be invited to Vienna on January 20th to speak at SQL Saturday #579. I’ll be tackling a rather difficult subject … testing BI solutions. During this session, we’ll look at the features of the open-source framework named NBi. This framework provides support for automated tests in the fields of databases, cubes, reports and ETLs, without the need for .NET skills. The demos will show us the best approaches to quickly and effectively assert the quality of BI developments. We’ll go a step further, generating the tests through an interesting system of templates and test-case sources.

I’ll also be speaking (in French) in Paris at the MS Cloud Summit Paris 2017. The topic will be Azure Stream Analytics. This is a completely new session and it’s taking a lot of patience, perseverance, and hard work to get ready (and, to be honest, it’s not 100% finished). Building the slide deck is the quick and easy part. But designing the demo scenarios, scripting and testing the demos, is the time-consuming part (then will come the practicing part).

For both conferences, if Business Intelligence is not your cup of tea, there are plenty of other topics to choose from, with sessions geared toward Database Development, Database Administration/Security and even, in Paris, Office 365, Power Flow, Power Apps, Azure Functions, … Both conferences offer a wide variety of content for everyone.

See you there?

Create a syntax highlighter for Visual Studio and Visual Studio Code with a TextMate bundle

Yesterday, I decided to create a syntax highlighter for the genbiL language. The end result is available in this GitHub repository. The goal was to provide a better experience for users editing these files in their favorite code editor. It was a completely new experience for me and it turned out to be much easier than expected.

If you’re using genbiL, I recommend that you install it through the following commands:

cd %userprofile%\.vs\Extensions
git clone git://github.com/Seddryck/genbil-tmbundle genbil

Visual Studio and Visual Studio Code use a common format for syntaxes and snippets: TextMate bundles. If you want to create a new TextMate bundle for a given language, start by creating a new folder under %userprofile%\.vs\Extensions with the name of your language and then a subfolder named syntaxes.

In this folder, create a new empty file or (much easier) copy/paste an existing file from another language. The format of this file is a property list, an XML-like format with just a few elements:

  • <key/> specifies the type of content for the next element
  • <string/> specifies the value associated to the <key/> defined above
  • <array/> is a container to specify multiple values for the <key/>
  • <dict/> lets you specify a pair of <key/> and <string/> elements

It sounds weird at the beginning but it’s relatively easy to use.

The first action to take is to change or write a few key pieces of information:

  • uuid is a unique identifier for this syntax definition. Each new syntax definition must have its own uuid. Don’t re-use them or your editor will not be able to load your syntax.
  • The name of your language. Use a short, descriptive name. Typically, you will use the name of the programming language you are creating the syntax definition for.
  • The top-level scope for this syntax definition is the scopeName. It takes the form source.languageName.
  • fileTypes is a list of file extensions. When opening files of these types, Visual Studio will automatically activate this syntax definition for them.
<key>uuid</key>
<string>2BA75B32-707C-11D9-A928-000D93589AF6</string>

<key>name</key>
<string>GenbiL</string>	

<key>scopeName</key>
<string>source.genbil</string>

<key>fileTypes</key>
<array>
   <string>genbil</string>
</array>

After this initial definition, you’ll have to create patterns. Each pattern starts with a match. A match is defined through a regular expression.

Then comes the name. In fact, the name is a naming scope and it isn’t always obvious which one to pick. Check the TextMate online manual for guidance on scope names. It is important to re-use the basic categories outlined there if you want to achieve the highest compatibility with existing themes.

Themes have scope names hardcoded in their color definitions. They cannot possibly include every scope name you can think of, so they target the standard ones plus some rarer ones on occasion. This means that two themes using the same syntax definition may render the text differently!

Bear in mind too that you should use the scope name that best suits your needs or preferences. It’d be perfectly fine to assign a scope like constant.numeric to anything other than a number if you have a good reason to do so.

<key>areas</key>
<dict>
   <key>patterns</key>
   <array>
      <dict>
         <key>match</key>
         <string>\b(case|template|setting|suite)\b</string>
         <key>name</key>
         <string>storage.type.source.genbil</string>
      </dict>
   </array>
</dict>
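
As an additional illustration (the regular expression and the scope name below are my own picks for this example, not necessarily those used in the published bundle), a pattern highlighting single-quoted strings would be added to the same patterns array in exactly the same way:

<dict>
   <key>match</key>
   <string>'[^']*'</string>
   <key>name</key>
   <string>string.quoted.single.source.genbil</string>
</dict>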

Testing a Power BI Desktop solution with NBi

Since release 1.12 (21-DEC-2015), NBi offers the opportunity to connect to a Power BI Desktop model. What does it mean? Simply that you can interact with the model of your Power BI Desktop solution exactly the same way you’d use a Tabular model hosted by SSAS! You’re able to test the model (existence of tables, columns, …), the members of the dimensions (existence, format, ordering, …) and you can execute DAX queries on this model (and compare them to expected results). 100% feature parity between Power BI Desktop and SSAS Tabular.

How can you achieve this? First, download the latest version of NBi and then check how to set up your environment. Once you’re ready, it will be time to define your test-suite.

To define your connection string for a Power BI Desktop solution, you can use the settings element or the connectionString attribute of most of the elements provided by NBi. To tell NBi that your connection string references a Power BI Desktop solution and not a SQL Server instance, you need to start your connection string with “PBIX =” followed by the solution’s name (the filename without the .pbix extension). At the moment, NBi doesn’t support tokens other than the PBIX one, but this will be improved in the near future.

<settings>
  <default apply-to="system-under-test">
    <connectionString>
      PBIX = My Power BI Desktop Solution
    </connectionString>
  </default>
</settings>
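
To give an idea, a test executing a DAX query on this model could look like the following sketch (the table name Sales and the expected value are purely hypothetical; the assertion reuses the classical equalTo element with an inline result-set):

<test name="Row count of the Sales table">
  <system-under-test>
    <execution>
      <query>
        <![CDATA[
        evaluate row("Rows", countrows(Sales))
        ]]>
      </query>
    </execution>
  </system-under-test>
  <assert>
    <equalTo>
      <resultSet>
        <row>
          <cell>15000</cell>
        </row>
      </resultSet>
    </equalTo>
  </assert>
</test>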

To effectively run your tests, your Power BI Desktop solution must be running! Behind the (not so) closed doors, Power BI Desktop starts a local instance of SSAS Tabular. NBi will detect this instance based on the provided solution’s name and build the correct, more complex, connection string.

Naturally, you can manually start your PBI Desktop solution, and that will be the case when developing your solution. But if your setup is more advanced and you’re on a build or test server, then you’ll need to add some additional steps.

NBi offers a way to start your Power BI Desktop solution. To do this, you need to use the setup element of a test or group.

One of the setup tasks is exe-run; its common use is to start a small exe or batch file preparing the environment. In this case, we’ll use it to start the Power BI Desktop solution. You need to provide the file path of your pbix solution.

<setup>
  <tasks run-once="true" parallel="false">
    <exe-run name="My Power BI Desktop Solution.pbix"
      path="..\PowerBiDesktop\"/>
  </tasks>
</setup>

This solution has a first problem: the task exe-run will return quickly and the Power BI Desktop solution will not be fully started yet. At the moment, the best way to work around this issue is to introduce a wait task after the exe-run and ask it to wait for 10 seconds.

<setup>
  <tasks run-once="true" parallel="false">
    <exe-run name="My Power BI Desktop Solution.pbix" 
      path="..\PowerBiDesktop\" />
    <wait milliseconds="10000" />
  </tasks>
</setup>

The next problem will occur when you try to run your test-suite several times: you can’t open the same pbix file multiple times. So, before proceeding to the exe-run task, we need to ensure that no Power BI Desktop solution is open. To do this, we’ll kill all the processes corresponding to Power BI Desktop with the task exe-kill. Note that it will kill all the open Power BI Desktop solutions … don’t use this when developing on your laptop!

<setup>
  <tasks run-once="true" parallel="false">
    <exe-kill name="PBIDesktop" />
    <exe-run name="My Power BI Desktop Solution.pbix" 
      path="..\PowerBiDesktop\" />
    <wait milliseconds="10000" />
  </tasks>
</setup>

Feel free to provide feedback by reporting bugs or requesting new features.

SSAS measures’ format and Excel

For one of the cubes I’m working on, we needed to format a measure by adding the suffix k€. The front-end is Excel 2010/2013.

My colleague was in charge of this part of the development and had defined a FORMAT_STRING for the measure, as we usually do: FORMAT_STRING ="#,##0 k€". I was in charge of the testing, with NBi, and I had defined a test for the format of a cell; this test was successful. Sun shining.

A few days later, I received a screenshot of an Excel sheet and my eyes were caught by some poor formatting. In place of an expected 1.075 k€, I had an awful 1075,2357777. My first move was to connect with SSMS and run the equivalent MDX query and, surprise … the format was correct: 1.075 k€. Back to an Excel sheet connected to this cube, and indeed the format was wrong: 1075,2357777. Confirmation from the cube definition: the format was defined; confirmation from NBi: the format was correctly applied. WTF? Why wasn’t Excel able to display the correctly formatted value when other tools were?

Since the development of NBi, I’ve known that SSAS returns a VALUE and a FORMATTED_VALUE. NBi is able to retrieve this FORMATTED_VALUE to perform a test of a cell’s format. The SSAS documentation describes the relation between FORMATTED_VALUE and FORMAT_STRING:

FORMAT_STRING: The formatting template to be applied to the value of the cell to generate FORMATTED_VALUE property

I decided to use the SSAS profiler to retrieve the query emitted by Excel. The query wasn’t really complex but the CELL PROPERTIES were a bit puzzling: no trace of FORMATTED_VALUE, just FORMAT_STRING. Based on this observation, a search on Google confirmed that Excel retrieves the FORMAT_STRING and applies the format by itself (instead of relying on the FORMATTED_VALUE returned by SSAS).

Apparently, Excel and SSAS don’t have the same interpretation of the FORMAT_STRING. A miracle occurred and I took the decision to place a backslash before the k: #,##0 \k€. The format was still correct in SSAS and suddenly a k was also visible in Excel … a few seconds later, I added another backslash in front of the euro symbol and the value was correctly formatted in both Excel and SSAS.

Based on the SSAS documentation, the backslash is not needed, and indeed SSAS correctly handles the FORMAT_STRING without the backslash. But apparently Excel is less tolerant and the backslash is mandatory for this kind of formatting.

 

SQL Konferenz 2016

I am privileged and happy to announce that, for the second year in a row, I will be speaking at the SQL Konferenz event in Darmstadt (Germany). The two main conference days will be on 24 and 25 February, with keynote speeches and 4 to 5 parallel tracks addressing all the news about SQL Server. A pre-con day is scheduled on the 23rd with three interesting topics: “Agile data warehousing”, “SSIS performance design patterns” and “Data mining algorithms in SSAS, Excel, R, and Azure ML”.

On my side, I will present SSDT – SQL Server Data Tools, the database part (not the BI part). The session will be demo-intensive and will cover topics such as the creation of custom static code analysis rules, the usage of T4 templates, build and deployment contributors, and continuous integration.

Will I see you there?

Speaking at Journées SQL Server 2015

I’ll be speaking (in French) about Power BI during the next Journées SQL Server 2015 in Paris (to be 100% honest, it’s not Paris but Issy-les-Moulineaux … about 100 m from Paris).

This is a great event with a lot of really interesting topics: the new features of SQL Server 2016 (SQL engine and BI), Power BI, BI in the Cloud and some DBA tips & tricks.

My session will be about the architecture and new features. I’ll explain the different components of the Power BI suite, their roles and what you can achieve with them: especially DirectQuery, live SSAS connections, scheduled refreshes, the Personal Gateway, the API and some other less well-known features. I’ll also take a few minutes to introduce some small but productive killer features from the latest releases that you might have missed.

If you want to spend a day with Power BI, note that you have the possibility to attend three additional sessions about Power BI: Data Viz, Dev and Embedded.

Personally, in addition to the two keynotes (I should introduce the Power BI track during the keynote of the second day), I plan to attend a few sessions:

  • “Always On 2016: Que va-t-il se passer” by Nicolas Soukoff [t]. To be honest, I’m certainly not an AlwaysOn specialist, it’s clearly not my domain. I will surely learn many things.
  • “Azure Data Warehouse & Data Lake” by Romain Castères [t] & Frank Mercier [t]. I’m not working on these two topics professionally, but I’m sure that will come in the near future and I prefer to stay relatively up to date with these technologies.
  • “REX Déploiement continu OLTP & SSIS” is also on my shortlist, but it’s at the same time as “Boîte à outils pour le MDX”. I’ll surely have to make a hard choice.

See you there?