Using Entity Framework Core Value Converters for Partial SQL Server JSON Support

Overview

First, let’s make sure I don’t set expectations too high. Even though SQL Server 2016 and later has excellent support for JSON-formatted data, and even though Entity Framework Core’s provider system allows providers to be created that work with document databases, the SQL Server provider does not yet have first-class JSON support. Though integrated support is lacking, we can still work with JSON data in SQL Server using Entity Framework Core value conversion (new in EF Core 2.1).

As you’ll see below, there isn’t much that needs to be done to support JSON-formatted data in your relational object structure. This is due both to the value converters themselves and to EF Core’s ability to smartly determine which logic can be translated for remote queries (SQL) and which will have to be done in memory (client evaluation). That distinction matters because Entity Framework Core will not know how to translate queries against objects stored as JSON text into SQL and will have to use client evaluation.

This directly contrasts with Entity Framework 6, which attempts to translate the full query for remote execution and throws a runtime exception if it is unable to do so. Developers have to know which statements are supported and structure their queries accordingly, which results in inelegant inclusions of .ToArray() or .ToList() embedded in their queries to split the remote and client-evaluated portions.

For developers who want to avoid client-side evaluation, EF Core logs a warning by default whenever client evaluation is needed so they can restructure their data or queries. They can go a step further and treat those warnings as exceptions, which makes the behavior similar to EF 6: a runtime exception whenever EF Core would attempt client evaluation (a sketch follows below). A lot of thought has been put into making EF Core easy to use while giving developers the information they need to evaluate performance.
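
Here is a minimal sketch of opting into that stricter behavior with EF Core 2.1’s warning configuration:

//Inside a DbContext subclass; requires Microsoft.EntityFrameworkCore
//and Microsoft.EntityFrameworkCore.Diagnostics.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder.ConfigureWarnings(warnings =>
        warnings.Throw(RelationalEventId.QueryClientEvaluationWarning));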

With all that said, let’s jump into Value Converters and how we can use them to serialize our objects as JSON formatted strings without jumping through hoops.

 

Getting Started

You will need an application that can use Entity Framework Core 2.1 in order to use value conversion. The type of application doesn’t matter otherwise. The code listings and log output in this post came from an ASP.NET Core 2.1 web application but this works just as well with console applications.

Setting up your models and context works as it normally does for a code-first solution, right up until you generate creation scripts for your database or run the code for the first time against an existing database. When scaffolding a context from an existing database, some extra work is needed to replace the string properties mapped to the nvarchar(max) columns that hold JSON data with the classes that map to the JSON data itself. There is no sample project to reference since the necessary code is small and easy to understand; I’ve included the code listings for the important classes instead.

 

Creating the Value Converter

There are generally two options for creating the value converter, and the code listing for the DemoDbContext.cs class below shows both. The first option is, in my opinion, a bit more readable in the context class and is easily repeated should you want several properties serialized into JSON. It extracts the implementation logic to a separate class that is itself less readable than Option 2 but has the added benefit of making it convenient to apply a standard set of JSON serializer settings to every property you want converted.

Option 2 has the benefit of being less work and less code overall while making the conversion visible at the point of registration. Applying JSON serializer settings would only impact readability a little, and the code can be copied and pasted for any additional properties you want converted to JSON instead of mapped to their own tables. Personally, I prefer the separate class, especially when using it for more than one property, if for no other reason than it guarantees the same JSON serializer settings are used on all the properties and it reduces duplicated implementation logic.

DemoDbContext.cs

using EFJsonValueConverter.Models.ValueConverters;
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json; //For option 2 below

namespace EFJsonValueConverter.Models
{
    public class DemoDbContext : DbContext
    {
        public DemoDbContext(DbContextOptions<DemoDbContext> options)
            : base(options)
        { }
        
        public DbSet<Form> Forms { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);

            //Option 1: Using a value converter in a separate class
            modelBuilder.Entity<Form>()
                .Property(p => p.ResponseData)
                .HasConversion(JsonValueConverter<ResponseData>.Default);

            //Option 2: Putting the value conversion logic directly in DbContext.
            //With this method you do not need EFJsonValueConverter.cs at all.
            //(Both options are shown for illustration; the last HasConversion
            //call for a property wins, so keep only one in real code.)
            modelBuilder.Entity<Form>()
                .Property(p => p.ResponseData)
                .HasConversion(
                    entity => JsonConvert.SerializeObject(entity),
                    value => JsonConvert.DeserializeObject<ResponseData>(value));
        }
    }
}

 

EFJsonValueConverter.cs

using Microsoft.EntityFrameworkCore.Storage.ValueConversion;
using Newtonsoft.Json;

namespace EFJsonValueConverter.Models.ValueConverters
{
    public class JsonValueConverter<TEntity> : ValueConverter<TEntity, string>
    {
        public JsonValueConverter(JsonSerializerSettings serializerSettings = null,
                                  ConverterMappingHints mappingHints = null)
            : base(model => JsonConvert.SerializeObject(model, serializerSettings),
                   value => JsonConvert.DeserializeObject<TEntity>(value, serializerSettings),
                   mappingHints)
        {
            //No ctor body; everything is passed through the call to base()
        }

        public static ValueConverter Default { get; } = 
            new JsonValueConverter<TEntity>(null, null);

        public static ValueConverterInfo DefaultInfo { get; } =
            new ValueConverterInfo(typeof(TEntity),
                typeof(string),
                i => new JsonValueConverter<TEntity>(null, i.MappingHints));
    }
}

Note that the DefaultInfo property is called for by the EF Core documentation, but the Default property is not; it’s just a convenience property for the HasConversion(...) call in the DB context class and isn’t required. If you’d rather not have it, move the property body into the HasConversion(...) call instead.
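
For illustration, dropping the Default property and registering the converter inline would look something like this; the serializer settings shown are just an example of the kind of standardization the separate class makes convenient:

var serializerSettings = new JsonSerializerSettings
{
    NullValueHandling = NullValueHandling.Ignore
};

modelBuilder.Entity<Form>()
    .Property(p => p.ResponseData)
    .HasConversion(new JsonValueConverter<ResponseData>(serializerSettings));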

 

Form.cs

using System;
using System.ComponentModel.DataAnnotations;

namespace EFJsonValueConverter.Models
{
    public class Form
    {
        public long Id { get; set; }

        [Required]
        public bool IsDeleted { get; set; } = false;

        [Required, StringLength(50)]
        public string DisplayName { get; set; }
        
        [Required]
        public DateTimeOffset Created { get; set; }

        [Required]
        public DateTimeOffset LastModified { get; set; }

        public int SchemaCompatabilityLevel { get; set; }

        //Serialized as JSON string using HasConversion(...)
        public ResponseData ResponseData { get; set; }
    }
}

 

The properties in ResponseData.cs are not important here, so I’ve omitted the code listing. The class just needs to conform to whatever the JSON serializer can support; a minimal sketch follows.
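
Since the query examples below reference an IsRush property, here is what such a minimal ResponseData could look like. Only IsRush comes from the examples; the rest of the shape is an assumption:

namespace EFJsonValueConverter.Models
{
    public class ResponseData
    {
        //Queried in the examples below. Anything else Json.NET
        //can serialize is fair game to add here.
        public bool IsRush { get; set; }
    }
}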

 

Query Impact

As mentioned earlier, EF Core will not understand how to translate the LINQ queries in your code into compatible SQL for properties that are converted to JSON. Even if the relational database you are using supports querying JSON directly, EF Core does not know that’s what you’re translating your object into; it just sees that you’ve created a string. But since EF Core can handle client-side evaluation automatically, it appears as though it actually is making that leap. It’s important to remember this because, in order for the query to be evaluated on the client, all the candidate records that need to be searched must first be retrieved and materialized. Only after that has happened can EF Core evaluate any query that uses objects converted to JSON for storage. If your primary query logic depends on the data inside the JSON object(s), performance will plummet as more data is added to the database.

One way to reduce the impact of client-side evaluation is to identify the properties you query on most and ensure they live in an object that maps directly to a table, promoting (or duplicating) them from the JSON object as necessary.

If that approach is not feasible, you could create custom stored procedures that leverage the JSON support in SQL Server and expose those through your context. That option requires you to know which properties are actually being serialized to JSON and which stored procedures are the right ones to use. It is a bit more work overall and could lead to unexpected results if the schema of the JSON data changes frequently, so understanding how SQL Server operates on JSON-formatted data is important if you plan to leverage this option.
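
As a sketch of that idea, EF Core 2.1’s FromSql can run raw SQL (or call a stored procedure) that uses SQL Server’s JSON_VALUE function. The example below assumes the table and column names from the listings in this post and Json.NET’s default lowercase serialization of booleans:

//Server-side JSON query sketch; a stored procedure wrapping this
//SELECT could be called through FromSql the same way.
var rushForms = _context.Forms
    .FromSql("SELECT * FROM [Forms] WHERE JSON_VALUE([ResponseData], '$.IsRush') = 'true'")
    .ToList();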

To demonstrate the client side evaluation impact, let’s take a look at some queries.

var forms = _context.Forms.Where(f => f.ResponseData.IsRush);

In this example ResponseData is a property of the Form class and is being serialized to JSON via a value converter. Since ResponseData looks like a string to EF Core, it doesn’t know how to generate the SQL necessary to query for the property called IsRush directly. Instead it’s going to retrieve all the records in the Forms table and materialize them in memory. The value converter kicks in when the objects are being materialized and converts the JSON strings to the ResponseData objects. Once it has all the data it can then evaluate the query and return the records containing a true value for IsRush. Here is the log output for the query:

Microsoft.EntityFrameworkCore.Query:Warning: The LINQ expression 
'where [f].ResponseData.IsRush' could not be translated and will be 
evaluated locally.

Microsoft.EntityFrameworkCore.Database.Command:Information: Executed 
DbCommand (31ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT [f].[Id], [f].[Created], [f].[DisplayName], [f].[IsDeleted], 
[f].[LastModified], [f].[ResponseData], [f].[SchemaCompatabilityLevel]
FROM [Forms] AS [f]

Notice the SQL query doesn’t mention the IsRush property at all, yet if you ran this code against data you’d see, as I did, that the results were correctly filtered. Obviously this could be problematic if you have a lot of data; bringing back your entire data set to evaluate a property in memory is not a good idea. To mitigate this, limit the records in some other way to reduce the number coming back.

 

var forms = _context.Forms.Where(f => !f.IsDeleted && f.ResponseData.IsRush);

EF Core splits this into two parts. First it generates SQL to bring back only the forms that are not marked as deleted, then it queries the result set in memory for those that need to be rushed. This approach works when the properties you primarily query against are mapped to table columns and the JSON data only needs to be queried to make a small refinement to the results.

Microsoft.EntityFrameworkCore.Query:Warning: The LINQ expression 
'where [f].ResponseData.IsRush' could not be translated and will be 
evaluated locally.

Microsoft.EntityFrameworkCore.Database.Command:Information: Executed 
DbCommand (26ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT [f].[Id], [f].[Created], [f].[DisplayName], [f].[IsDeleted], 
[f].[LastModified], [f].[ResponseData], [f].[SchemaCompatabilityLevel]
FROM [Forms] AS [f]
WHERE [f].[IsDeleted] = 0

 

Another approach would be to look at the types of queries you’re doing. If looking for records that need to be rushed is a common activity, see if the IsRush property can be promoted to the containing class. In this example that would mean storing IsRush on the Form class instead of in ResponseData.

var forms = _context.Forms.Where(f => f.IsRush);

Now EF Core can generate the SQL directly and bring back only the records you want. If promoting the property is not possible, creating a function or stored procedure you can call takes a bit more effort but enables more complex queries for the things you do often.

Microsoft.EntityFrameworkCore.Database.Command:Information: Executed 
DbCommand (21ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT [f].[Id], [f].[Created], [f].[DisplayName], [f].[IsDeleted], 
[f].[LastModified], [f].[ResponseData], [f].[SchemaCompatabilityLevel], 
[f].[IsRush]
FROM [Forms] AS [f]
WHERE [f].[IsRush] = 1

 

Wrapping Up

One overlooked aspect of what we’ve done above is that we don’t necessarily require SQL Server to support JSON at all; in the simplest use cases Entity Framework Core is doing all the serialization, deserialization and querying. Where SQL Server support comes in handy is when you absolutely must be able to query into the JSON data on the server. Adding stored procedures or functions you can call through Entity Framework opens that possibility up while waiting for JSON querying support to land in a future version. As a cautionary note, I’m not sure JSON support has been committed to any particular release milestone, but there has been such extensive discussion around the right way to achieve it that I’m sure it’s a matter of “when” and not “if”.

So who should use this? First, anyone who was using [NotMapped] properties to wrap Entity Framework entities and inject JSON serialization and deserialization would benefit from this cleaner approach, which also removes the public conversion-supporting properties that shouldn’t be accessed directly. Developers who have an object that is expected to change frequently and don’t want to continuously alter their database schema could also benefit, especially if introducing a separate document database is too much effort or not feasible for their environment. This approach should be used sparingly though; developers shouldn’t go looking for ways to use it if there is another acceptable way to achieve their goals.

Entity Framework Core has come a long way in a short amount of time and the possibilities for developers continue to grow. Its design is flexible and extensible and the provider approach opens it up to new usage scenarios that were not possible previously. I am eagerly awaiting query support for JSON formatted data in the relational or SQL Server provider but in the meantime I know I can fall back on value converters and other features of EF Core to get the job done.

Publishing .NET Core App with Runtime

Background

In a previous post I talked about the ability for developers to publish their applications with a local copy of .NET Core, which allows the application to run on systems without a runtime installed at the system level. Besides making it convenient to deploy to systems you don’t control while ensuring the application will run, it also means your application uses the exact version you’ve tested with, and you know no other application will affect it with updates or other changes.

The rest of this post walks through publishing an ASP.NET Core application and running it on both Windows and Ubuntu (via WSL). The Windows environment has .NET Core already installed at the system level, but the demo Ubuntu environment does not. To demonstrate, I used the Angular template with a minor change to display the current environment from which it is running.


Publishing for Global Runtime

When an application is published without specifying a runtime identifier (via --runtime or -r on the command line), it is built with the expectation that it will run on systems with a pre-existing install of .NET Core at the system (or global) level capable of running it.

dotnet publish -c Release


The result is a folder containing the files generated from executing all the build stages of the application. Generally the entry point, if not altered, will be the name of the project with a .dll extension and can be run from the command line as shown below. In our example that is AppLocalDeployment.Web.dll.

File Explorer view of a project publish folder showing 22 files using 1.19 megabytes of space.

With the publish complete and the correct entry point file located, we can run the application using the pre-installed .NET Core runtime.

dotnet AppLocalDeployment.Web.dll


Attempting to run an application on a system without a compatible .NET Core runtime already installed will result in errors.


Publishing for Application Local Runtime

When an application is published and includes a runtime identifier (via --runtime or -r on the command line), the result is a folder containing the same files generated from executing all the build stages of the application, plus the necessary platform-specific supporting files as well as the .NET Core runtime.

dotnet publish -c Release -r ubuntu-x64


Notice the publish folder is now under a subfolder named for the target runtime identifier. Looking at the publish folder now that it includes the runtime, the number of files and the total size have grown significantly.

File Explorer view of a project publish folder showing 367 files using 94.6 megabytes of space.

With the runtime packaged we no longer use the dotnet command; the publish output contains a platform-specific executable we can launch directly.
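
Assuming the default output naming, where the native launcher takes the project’s name, the command on Ubuntu looks like this:

./AppLocalDeployment.Web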


Looking Forward

To address the size and file count inflation of application-local deployments, new build tools are being developed. These tools will reduce the size of the deployment package by linking only the APIs that are used and cutting out the rest, similar to the way Xamarin tools reduce app sizes for mobile devices. They should also make releasing an application easier by packaging everything into a single executable. Being able to include a runtime without greatly increasing the size (depending on the number of APIs used) and ending up with a single file is an incredible deployment story for developers.

Demystifying .NET Core 3 “Desktop Packs”

With .NET Core 3, Microsoft plans to release a set of Windows Desktop Packs which promise to enable developers to port or create desktop applications on .NET Core with WPF, UWP or even WinForms. These packs will only work on compatible Windows targets and won’t be cross-platform, but they make it possible for developers to choose .NET Core instead of the .NET Framework for their projects.

Graphic depicting the separation between .NET Core cross platform features and the Windows-only Desktop Packs

The Desktop Packs are separate from the cross platform components of .NET Core 3. Source.

You might ask why, if the applications can only run on Windows, someone would bother to use .NET Core instead of the .NET Framework when the major selling point of .NET Core is that it’s cross-platform. For some developers it might not make sense to switch, but others might appreciate the ways .NET Core differs. Performance improvements, the ability to package a runtime with an application and all the other great differentiating factors of .NET Core become accessible; only cross-platform execution is off the table.

What it’s not

  • Windows desktop graphical applications running on Linux, macOS, etc.
  • Bloating the cross platform runtime to include Windows-specific APIs of no value to Linux, macOS, etc.

What it is

  • Optional, additional components that can be used on compatible Windows platforms
  • A bridge to bring Desktop applications over to .NET Core
  • A method to progressively modernize applications without requiring a full porting operation or rewrite

Why bother if it’s only on Windows anyway

  • .NET Core can outperform the .NET Framework in a few benchmarks and continues to improve in speed and memory use
  • .NET Core applications can be built to run against a system-wide installed version of the framework or they can be published with the specific version of the runtime they’ve tested against for target platforms, also called App Local deployments.
  • App Bundling, when released, promises to shrink applications with bundled runtimes and create a single EXE for simplified deployments
  • Access to additional new ways of working using simplified project formats and access to command line tools

Wrap up

If you’ve been putting off exploring .NET Core because feature parity with the .NET Framework hasn’t been there, I think it’s safe to say you can start exploring now. .NET Core is small and easily installed and uninstalled, making it very low risk to get started in any environment, including your primary development machine. With the .NET Core 3 preview coming later this year and release set for 2019, now is a good time to start getting familiar with the new capabilities and explore the tooling.

.NET Core Global vs App Local Deployment

Global Deployment Model

The full .NET Framework has always been installed at the global system level. A few versions can exist side by side, but from version 4 onward, new installs update the existing libraries in place. The benefit of this approach is that libraries can be updated to address security vulnerabilities or to include performance improvements. The downside, as we’ve seen, is that all applications are affected by the upgrade, which could, despite extensive testing and validation, include a regression or a change that has adverse effects on existing applications. For software vendors this is especially troubling, as their products may have been tested and validated against a specific version of the framework and now they must investigate reports from their customers of crashes or other side effects.

With .NET Core, applications can continue to be developed in the same manner as with the .NET Framework. They can be written and compiled against a specific version of .NET Core and then released expecting the systems on which they are deployed to have a compatible version. Applications can automatically roll forward to a newer version of the framework if the exact version is not present (see the sketch below). This method is convenient and gives applications an opportunity to use updated .NET Core installations that may contain performance improvements or bug fixes, but it also exposes the application to a possibly untested configuration that might impact its stability or performance. Fortunately there is another option for when tight control of the framework version is desirable or when the presence of the correct .NET Core version can’t be expected.
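
As a sketch of where this is controlled in .NET Core 2.x, the application’s generated runtimeconfig.json carries the requested framework version alongside a rollForwardOnNoCandidateFx setting; the contents here are illustrative:

{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "2.1.0"
    },
    "rollForwardOnNoCandidateFx": 1
  }
}

The default value of 1 allows rolling forward to a newer minor version when the exact one is missing, 0 limits roll forward to patch versions only and 2 also allows major-version roll forward.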

App Local Deployment Model

With app local deployments, applications can be developed and deployed with a specific version of .NET Core which will not be shared with, nor impacted by, other versions of .NET Core that may be on the target systems. The benefit here is that applications can be developed and tested against a specific version of .NET Core, just as applications always have been, with the added assurance that target systems will run that exact version. Support becomes a little easier, from the initial install satisfying prerequisite conditions through ongoing use. When a new version of the application is released, the packaged runtime can be updated along with it if desired.

The tradeoff is that applications won’t automatically pick up updated .NET Core APIs in new versions that might contain performance improvements. Free, unexpected performance gains are out of the picture unless you push a new version with the updated framework packaged.

Which Model Is Right?

As with everything the answer is “it depends.” (Did you really expect a more straightforward answer?) It comes down to what conditions are most important.

If you do not control the target systems and can’t guarantee the expected version of .NET Core is present, or if unexpected changes to the behavior of the application are unacceptable (regardless of how small the chance), then packaging the runtime with the application is probably the better option.

If you can be assured the correct .NET Core version is present on target systems, or if the size of your application deployment is a paramount concern, or if you don’t want to publish and maintain target-specific versions of your application, then using the global version of the runtime might be the best option.

Looking Forward

A new set of tools, currently referred to as the App Bundler, should help those who want to publish their application as a single EXE while keeping the size small. The toolset would function much like Xamarin’s linker, reducing size by cutting out the parts of libraries the application doesn’t use; that reduction matters most where Xamarin is used most – on mobile devices. Bringing this functionality to .NET Core could provide a better balance between global and app local deployments. Information is still sparse at this stage, but for now it’s anticipated to follow the .NET Core 3 release timeline. I don’t see that the bundler would specifically need .NET Core 3 to function, so it’s possible it could lead or, more likely, follow the .NET Core 3 release date, which is currently set for a beta later this year and full release in early 2019.

You can find more information on the Runtime IDs for supported architectures and operating systems on Microsoft Docs.

First Experiences – VS App Center

Background

Recently I was tasked with creating a mobile app for an internal company need. Normally I’d default to a web application with a responsive UI for different devices, but in this case there was a core requirement to support push notifications. The timeline was “as soon as possible,” and that included picking a platform (the two leading choices were Cordova/Ionic and Xamarin), learning the toolchain, integrating push notifications and ensuring everything worked even after being wrapped in a security product (the Citrix XenMobile MDX Toolkit – I still don’t know which of those words is the most important, so I include them all).

App Technology Stack

Since I’m more comfortable with C# compared to web frameworks I started with Xamarin. I’ve attended some user group talks that used Xamarin and knew Xamarin Forms was probably the right starting point. I only had to support iOS but Android was a nice to have since we have many users with Android devices.

Visual Studio App Center

After trying a few different options based on previous knowledge (Azure Notification Hubs, Azure App Services) and things discovered along the way (Google Firebase for Android and iOS), with varying degrees of success, I stumbled on Visual Studio App Center. I initially started my searching in early February 2018; App Center had only been announced a few months prior, at the Connect(); 2017 conference in November. Because it was so new under that name, it was not high in search results for push notifications with Xamarin. That’s really unfortunate because App Center is an integrated offering of previous solutions that already had great support and adoption.


First Experiences – Xamarin Development

Background

I was recently tasked with building a mobile app for my company that received push notifications and directed users to internal resources (via XenMobile SecureWeb). I have looked at mobile apps at various points in the past and tried Xamarin, Cordova, Ionic and even native Swift and Java, but I struggled to get anything of value implemented, so I never went further. That left me starting from scratch to figure out how I’d satisfy the requirement.

I had passively been attending Greg Shackles’ NYC Mobile .NET User Group and had some idea that Xamarin Forms was probably my best bet for a cross platform solution. I am not particularly fond of XAML but the application itself intentionally had almost no UI so that was not a concern.

My exploration meandered a bit while I tried things I knew about (Azure App Services, Azure Notification Hubs) and then things I learned about by extension (Firebase) before finally settling on exactly what I needed: Visual Studio App Center. It’s a lightweight solution that offers analytics, crash reporting and push notifications. The configuration for push notifications was straightforward and just worked.

With App Center the infrastructure was set up and configured, and I could focus on the functionality of the app. The rest of this post is about my experience using the tooling to build and test that application on iOS and Android devices.


Rapid Prototyping with “Napa” Office 365 Development Tools

As a developer there are times I want to quickly throw something together to see if an idea I have is workable. I don’t need all the ceremony of creating a project with the necessary dependencies, good architecture and so on if all I want to know is whether I can access some service. For most things I can fire up a quick test project using either the Console or Web project template and have my answer in a few minutes. For SharePoint, though, this has never been very easy.

Introducing Napa Tools

“Napa is a great way to get started building Office Add-Ins right in a browser window. You don’t need to install any tools such as Visual Studio. All you need is an Office 365 account and a supported browser.”

With previous versions of SharePoint, server-side code was really the only way to create custom SharePoint solutions. Developing for SharePoint was tricky, as it required a local install in order to build and package solutions, which made throwing together a test harness very slow when proving out a concept. With the introduction of client-side libraries and services, extending SharePoint with custom code has become much easier.

Even though developing for SharePoint no longer means server-side code, it still has packaging requirements. That’s where Napa comes into play. Using an online editor that resembles lightweight editors like Sublime Text and Visual Studio Code, anyone with a modern browser can throw together a quick test or demo without a local install of SharePoint, or even worrying about manifests and elements files. The editor provides syntax highlighting, jump to definition and basic IntelliSense, and it saves your work automatically.


“Napa” Office 365 Development Tools Online Editor


Windows 8 and Wireless Display (WiDi) Not Working

Devices Charm showing the option to add a wireless display.

Wireless Display Support in Windows 8

I just spent way too many hours trying to figure out why every blog post on the subject of using a wireless display claims there is a link to “Add a wireless display” in the Devices charm under the Project option, yet it was missing on my machine. I recently bought a Samsung UN55H7150 television with screen mirroring capability, and my HP TouchSmart 15t laptop claimed to support wireless displays, but for some reason I couldn’t make them work together.

If you’re having this problem, I’ll save you the effort of figuring it out: you’ve got a virtual network driver installed that is interfering with this capability. I ran across a reference in a blog saying someone had trouble after installing their company VPN software. I thought that might be my issue too, but then I realized this hadn’t worked since I installed Oracle VirtualBox. I tried disabling the driver, but that didn’t seem to do anything. After uninstalling VirtualBox completely, the option was available again.

I can’t tell you how many hours over how many days I lost to this cycle of attempting to figure it out, getting frustrated, giving up and trying again another day, but it was too many.

If the “add a wireless display” option is missing from your Windows 8 machine and you’re sure you have hardware capable of supporting WiDi or Miracast, look at your installed software and remove anything that added a networking driver.

Frustratingly yours,
-Erik