.NET 6: A Super-Complete Guide

Time: 2022-11-25

Welcome to .NET 6. Today's release is the culmination of over a year of hard work by the .NET team and community. C# 10 and F# 6 deliver language improvements that make your code simpler and better. Performance has improved dramatically, and at Microsoft we've already seen it lower the cost of hosting cloud services. .NET 6 is the first version to natively support Apple Silicon (Arm64), and it has also been improved for Windows Arm64. We built a new dynamic profile-guided optimization (PGO) system that delivers deep optimizations that are only possible at runtime. Cloud diagnostics have been improved with dotnet monitor and OpenTelemetry. WebAssembly support is more capable and performant. New APIs have been added for HTTP/3, processing JSON, math, and directly manipulating memory. .NET 6 will be supported for three years. Developers have already started upgrading applications to .NET 6, and we've heard great early results in production. .NET 6 is ready for your application.

You can download .NET 6 for Linux, macOS, and Windows.

  • Installer and binaries
  • container image
  • Linux packages
  • release notes
  • API differences
  • known issues
  • GitHub issue tracker
See the ASP.NET Core, Entity Framework, Windows Forms, .NET MAUI, YARP, and dotnet monitor posts for what's new in each of those areas.

.NET 6 Highlights

.NET 6 is:

  • Production stress-tested with Microsoft services, cloud applications run by other companies, and open source projects.
  • Supported for three years as the latest Long Term Support (LTS) release.
  • A unified platform across browser, cloud, desktop, IoT and mobile applications, all using the same .NET libraries and enabling easy code sharing.
  • Massively improved in performance, especially file I/O, which collectively yields reductions in execution time, latency, and memory usage.
  • C# 10 offers language improvements such as record structs, implicit usings, and new lambda capabilities, while the compiler adds an incremental source generator. F# 6 adds new features, including task-based async, pipeline debugging, and numerous performance improvements.
  • Visual Basic has improvements in the Visual Studio experience and the Windows Forms project opening experience.
  • Hot Reload lets you skip rebuilding and restarting your application to see new changes while it is running; it is supported in Visual Studio 2022 and the .NET CLI, for C# and Visual Basic.
  • Cloud Diagnostics has been improved with OpenTelemetry and dotnet monitor, and is now supported in production and available in Azure App Service.
  • The JSON API is more powerful and more performant via the serializer’s source generator.
  • The minimal API introduced in ASP.NET Core simplifies the getting started experience and improves the performance of HTTP services.
  • Blazor components can now be rendered from JavaScript and integrated with existing JavaScript-based applications.
  • WebAssembly AOT compilation of Blazor WebAssembly (Wasm) applications, with support for runtime relinking and native dependencies.
  • Single-page applications built with ASP.NET Core now use a more flexible pattern that works with Angular, React, and other popular front-end JavaScript frameworks.
  • HTTP/3 was added so that ASP.NET Core, HttpClient, and gRPC can all interact with HTTP/3 clients and servers.
  • File IO now supports symbolic links and has greatly improved performance thanks to a rewritten-from-scratch FileStream.
  • Security has been improved with support for OpenSSL 3, the ChaCha20Poly1305 encryption scheme, and runtime defense-in-depth mitigations, notably W^X and CET.
  • Single-file applications (which run without extraction) can be published for Linux, macOS, and Windows (previously only Linux).
  • IL trimming is now more powerful and efficient, and new warnings and analyzers ensure correct end results.
  • Source generators and analyzers have been added to help you generate better, safer, and more performant code.
  • Building from source enables organizations such as Red Hat to build .NET from source and provide their own builds to their users.

This release includes about ten thousand git commits. Even though this article is long, it skips many improvements. You'll need to download and try .NET 6 to see everything that's new.

Support

.NET 6 is a Long Term Support (LTS) release and will be supported for three years. It supports multiple operating systems including macOS Apple Silicon and Windows Arm64.

Red Hat is working with the .NET team to support .NET on Red Hat Enterprise Linux. On RHEL 8 and later, .NET 6 will be available for the AMD and Intel (x86_64), ARM (aarch64), and IBM Z and LinuxONE (s390x) architectures.

Please start migrating your applications to .NET 6, especially .NET 5 applications. We’ve heard from early adopters that upgrading from .NET Core 3.1 and .NET 5 to .NET 6 is simple.

Visual Studio 2022 and Visual Studio 2022 for Mac support .NET 6. It is not supported by Visual Studio 2019, Visual Studio for Mac 8 or MSBuild 16. If you want to use .NET 6, you need to upgrade to Visual Studio 2022 (now also 64-bit). The Visual Studio Code C# extension supports .NET 6.

Azure App Service:

  • Azure Functions now supports running serverless functions in .NET 6.
  • The App Service .NET 6 GA Announcement provides information and details for ASP.NET Core developers excited to start using .NET 6 today.
  • Azure Static Web Apps now supports full-stack .NET 6 applications with a Blazor WebAssembly front end and an Azure Functions API.
Note: If your app is already running on a .NET 6 preview or RC release on App Service, it will be updated automatically on the first restart after the .NET 6 runtime and SDK are deployed to your region. If you deployed a self-contained application, you will need to rebuild and redeploy.

Unified extension platform

.NET 6 provides a unified platform for browser, cloud, desktop, IoT and mobile applications. The underlying platform has been updated to meet the needs of all application types and facilitate code reuse across all applications. New features and improvements are available across all apps simultaneously, so your code running in the cloud or on your mobile device behaves the same way and has the same benefits.


The pool of .NET developers continues to expand with each release. Machine learning and WebAssembly are two recent additions. For example, with machine learning, you can write applications that find anomalies in streaming data. With WebAssembly, you can host .NET applications in the browser just like HTML and JavaScript, or mix them with HTML and JavaScript.

One of the most exciting new additions is the .NET Multi-platform App UI (.NET MAUI). You can now write code in a single project to deliver a modern client application experience across desktop and mobile operating systems. .NET MAUI will be released slightly later than .NET 6. We’ve invested a lot of time and effort into .NET MAUI and are excited to release it and see .NET MAUI applications in production.

Of course, .NET applications are also at home on the Windows desktop (with Windows Forms and WPF) and in the cloud with ASP.NET Core. They're our longest-serving application types and remain very popular, so we've improved them in .NET 6 as well.

Targeting .NET 6

Continuing with the broad platform theme, it’s easy to write .NET code on all of these operating systems.

To target .NET 6, you need to use the .NET 6 target framework as follows:

<TargetFramework>net6.0</TargetFramework>
The net6.0 Target Framework Moniker (TFM) gives you access to all the cross-platform APIs that .NET provides. It's the best choice if you're writing a console application, an ASP.NET Core application, or a reusable cross-platform library.

If you’re targeting a specific OS (such as writing a Windows Forms or iOS application), there’s another set of TFMs (each targeting the self-evident OS) at your disposal. They give you access to all of net6.0’s APIs plus a bunch of OS-specific APIs.

  • net6.0-android
  • net6.0-ios
  • net6.0-maccatalyst
  • net6.0-tvos
  • net6.0-windows

Each unversioned TFM is equivalent to the minimum supported operating system version for .NET 6. You can specify the OS version if you want to be specific or to access newer APIs.

Both the net6.0 and net6.0-windows TFMs are supported (same as .NET 5). The Android and Apple TFMs are new to .NET 6 and are currently in preview. They will be supported in a later .NET 6 update.

There is no compatibility relationship between the OS-specific TFMs. For example, net6.0-ios is not compatible with net6.0-tvos. If you want to share code, you need to do it using source code with #if statements or binaries with net6.0-targeted code.

Performance

Since the start of the .NET Core project, the team has been focused on performance, and Stephen Toub does a great job of documenting .NET's performance progress with each release. His Performance Improvements in .NET 6 post covers the major improvements you'll want to know about. This article covers a handful of them, including file IO, interface casting, PGO, and System.Text.Json.

Dynamic PGO

Dynamic profile-guided optimization (PGO) can significantly improve steady-state performance. For example, PGO increased requests per second by 26% (510K -> 640K) for the TechEmpower JSON “MVC” suite.

Dynamic PGO builds on tiered compilation, which enables methods to first be compiled very quickly (referred to as "tier 0") to improve startup performance, and then to be recompiled with extensive optimizations enabled (referred to as "tier 1") once a method has proven to be impactful. This model allows tier 0 methods to be instrumented so that various observations can be made about the code's execution. When these methods are rejitted at tier 1, the information gathered from the tier 0 execution is used to better optimize the tier 1 code. That's the essence of the mechanism.

Startup time with dynamic PGO is slightly slower than the default because of the extra code that runs in tier 0 methods to observe method behavior.

To enable dynamic PGO, set DOTNET_TieredPGO=1 in the environment where the application runs. You also have to make sure tiered compilation is enabled (it is by default). Dynamic PGO is opt-in because it's a new and impactful technology. We want to see opt-in usage and collect feedback to make sure it's fully stress-tested. We did the same with tiered compilation. At least one very large Microsoft service already uses dynamic PGO in production. We encourage you to give it a try.

You can read more about the benefits of dynamic PGO in the Performance in .NET 6 post, including the following microbenchmark, which measures the cost of a specific LINQ enumerator.

private IEnumerator<int> _source = Enumerable.Range(0, int.MaxValue).GetEnumerator();
[Benchmark]
public void MoveNext() => _source.MoveNext();

This is the result with and without dynamic PGO.

[Image: MoveNext benchmark results with and without dynamic PGO]

That’s a pretty big difference, but there’s also an increase in code size, which may surprise some readers. This is the size of the assembly code generated by the JIT, not memory allocation (which is a more common focus). The .NET 6 performance post has a good explanation of this.

A common optimization in PGO implementations is “hot/cold splitting”, where frequently executed method parts (“hot”) are moved closer together at method start, and infrequently executed method parts (“cold”) are moved to the end of the method. This allows for better use of the instruction cache and minimizes potentially unused code loads.

For context, interface dispatch is the most expensive type of call in .NET. Non-virtual method calls are faster, and calls that can be eliminated via inlining are faster still. In this case, dynamic PGO provides two (alternative) call sites for MoveNext. The first (hot) directly calls Enumerable+RangeIterator.MoveNext; the second (cold) makes a virtual interface call via IEnumerator<int>. It's a big win if the hot site is called most of the time.

Here's the magic. When the JIT instruments this method's tier 0 code, it includes instrumentation for this interface dispatch to track the concrete type of _source on each call. The JIT finds that every call is on a type called Enumerable+RangeIterator, which is a private class used to implement Enumerable.Range inside the Enumerable implementation. So for tier 1, the JIT emits a check to see whether _source is of type Enumerable+RangeIterator: if it's not, it jumps to the cold section we highlighted earlier that does the normal interface dispatch. But if it is (which, based on the profiling data, is expected to be the vast majority of the time), it can proceed to directly call the non-virtualized Enumerable+RangeIterator.MoveNext method. Not only that, the JIT also deemed it profitable to inline that MoveNext method. The net effect is that the generated assembly code is a bit larger, but optimized for the exact scenario expected to be most common. These are the kinds of wins we hoped for when we started building dynamic PGO.

Dynamic PGO will be discussed again in the RyuJIT section.

File IO improvements

FileStream was almost completely rewritten in .NET 6, with a focus on improving asynchronous file IO performance. On Windows, the implementation no longer uses blocking APIs and can be several times faster! We've also improved memory usage on all platforms: after the first asynchronous operation (which typically allocates), asynchronous operations are now allocation-free! Additionally, we've unified the behavior of the Windows and Unix implementations for various edge cases (where possible).

The performance improvements of this rewrite benefit all operating systems. Windows benefits the most because it was furthest behind; macOS and Linux users should also see significant FileStream performance improvements.

The following benchmarks write 100 MB to a new file.

private byte[] _bytes = new byte[8_000];

[Benchmark]
public async Task Write100MBAsync()
{
    using FileStream fs = new("file.txt", FileMode.Create, FileAccess.Write, FileShare.None, 1, FileOptions.Asynchronous);
    for (int i = 0; i < 100_000_000 / 8_000; i++)
        await fs.WriteAsync(_bytes);
}

On Windows with SSD drives, we observed a 4x speedup and over 1200x allocation drop:
[Image: FileStream 100 MB write benchmark, .NET 5 vs .NET 6]

We also recognized the need for higher-performance file IO features: concurrent reads and writes, and scatter/gather IO. For these cases, we introduced new APIs on the System.IO.File and System.IO.RandomAccess classes.

async Task AllOrNothingAsync(string path, IReadOnlyList<ReadOnlyMemory<byte>> buffers)
{
    using SafeFileHandle handle = File.OpenHandle(
        path, FileMode.Create, FileAccess.Write, FileShare.None, FileOptions.Asynchronous,
        preallocationSize: buffers.Sum(buffer => buffer.Length)); // hint for the OS to pre-allocate disk space

    await RandomAccess.WriteAsync(handle, buffers, fileOffset: 0); // on Linux it's translated to a single sys-call!
}

This example demonstrates:

  • Open a file handle using the new File.OpenHandle API.
  • Preallocate disk space with the new preallocationSize option.
  • Write to files using the new Scatter/Gather IO API.

The preallocation size feature improves performance because write operations don't need to extend the file and the file is less likely to become fragmented. It also improves reliability: write operations no longer fail due to running out of space, since the space is already reserved. The scatter/gather IO APIs reduce the number of system calls required to write the data.

Faster interface checking and casting

Interface casting performance improved by 16–38%. This improvement is especially helpful for C# pattern matching to and between interfaces.

[Image: interface casting benchmark results]

This graph shows the size of the improvement on a representative benchmark.

One of the biggest advantages of migrating parts of the .NET runtime from C++ to managed C# is that it lowers the barrier to contribution. That includes interface casting, which was moved to C# as an early .NET 6 change. More people in the .NET ecosystem know C# than C++ (and the runtime uses challenging C++ patterns). Just being able to read some of the code that makes up the runtime is an important step on the way to developing the confidence to contribute.

Credit to Ben Adams.

System.Text.Json source generator

We added a source generator for System.Text.Json that avoids the need for reflection and code generation at runtime, and that generates optimal serialization code at build time. Serializers are usually written with very conservative techniques because they have to be. However, if you read your own serialization source code (which uses a serializer), you can see what the obvious choices are that would make the serializer much more optimal for your particular case. That's exactly what this new source generator does. Besides improving performance and reducing memory, the source generator produces code that is optimal for assembly trimming, which helps make smaller applications.
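As a sketch of what this looks like in practice (the POCO and context names here are illustrative), you declare a partial context class, and the generator fills in the serialization logic at build time:

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class WeatherForecast
{
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
}

// The generator emits serialization metadata and fast-path code for each
// type listed in a [JsonSerializable] attribute.
[JsonSerializable(typeof(WeatherForecast))]
internal partial class AppJsonContext : JsonSerializerContext
{
}

public static class Demo
{
    public static void Main()
    {
        var forecast = new WeatherForecast { Date = DateTime.Today, TemperatureC = 20 };

        // Passing the generated type info avoids runtime reflection entirely.
        string json = JsonSerializer.Serialize(forecast, AppJsonContext.Default.WeatherForecast);
        Console.WriteLine(json);
    }
}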

Serializing POCOs is a very common scenario. Using the new source code generator, we observe serialization speeds up to 1.6x faster than our baseline.
[Image: POCO serialization benchmark results]

The TechEmpower caching benchmark exercises a platform or framework's in-memory caching of information sourced from a database. The benchmark's .NET implementation performs JSON serialization of the cached data in order to send it as a response to the test harness.

[Image: TechEmpower caching benchmark results]

We observed ~100K RPS gain (~40% increase). When combined with MemoryCache performance improvements, .NET 6 delivers 50% higher throughput than .NET 5!

C# 10

Welcome to C# 10. A major theme of C# 10 is continuing the simplification journey that started with top-level statements in C# 9. The new features remove more ceremony from Program.cs, resulting in one-line programs. They were inspired by talking with people who have no C# experience (students, professional developers, and others) and learning what works and is most intuitive for them.

Most .NET SDK templates have been updated to provide the simpler, cleaner experience that C# 10 now makes possible. We've received feedback that some people don't like the new templates because they aren't suited to experts, remove object orientation, remove important concepts learned on the first day of writing C#, or encourage writing the whole program in one file. Objectively, none of these views is correct. The new model is equally suited to students and professional developers. However, it differs from the C-derived model that came before .NET 6.

There are several other features and improvements in C# 10, including record structs.

Global using directives

The global using directive lets you specify a using directive only once and apply it to every file you compile.

The following example shows the breadth of the syntax:

  • global using System;
  • global using static System.Console;
  • global using Env = System.Environment;

You can put global using directives in any .cs file, including Program.cs.

Implicit usings is an MSBuild feature that automatically adds a set of global using directives depending on the SDK. For example, a console application gets a different set of implicit usings than an ASP.NET Core application.

Implicit usings are opt-in, enabled in a PropertyGroup:

<ImplicitUsings>enable</ImplicitUsings>

Implicit usings are opt-in for existing projects but included by default in new C# projects. See the implicit usings documentation for details.
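For reference, here is roughly what the SDK generates for a plain console project (an auto-generated file under obj/; the exact set of namespaces varies by SDK and version):

// <auto-generated/>
global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;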

File-scoped namespaces

File-scoped namespaces enable you to declare the namespace for an entire file without nesting its contents in { … }. Only one is allowed per file, and it must come before any types are declared.

The new syntax is a single line:

namespace MyNamespace;
class MyClass { … } // Not indented

This new syntax is an alternative to the three-line indented style:

namespace MyNamespace
{
    class MyClass { … } // Everything is indented
}

The benefit is reduced indentation in the extremely common case where the entire file is in the same namespace.

Record structs

C# 9 introduced records as a special value-oriented form of classes. In C# 10, you can also declare record structs. Structs in C# already have value equality, but record structs add an == operator and an IEquatable<T> implementation, plus a value-based ToString implementation:

public record struct Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}
Just like record classes, record structs can be "positional", meaning they have a primary constructor that implicitly declares public members corresponding to the parameters:

public record struct Person(string FirstName, string LastName);

However, unlike record classes, the implicit public members are mutable auto-implemented properties. This makes record structs a natural grow-up story for tuples. For example, if you have a return type of (string FirstName, string LastName) and you want to upgrade it to a named type, you can easily declare a corresponding positional record struct and keep the mutable semantics, as the sketch below shows.
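For example (a sketch with hypothetical names), a tuple-returning method can be promoted to a named positional record struct while keeping the same mutable, value-equality semantics:

// Before: callers see an anonymous tuple type.
static (string FirstName, string LastName) GetNameTuple() => ("Jane", "Doe");

// After: the same shape as a named type; members are still mutable properties.
public record struct FullName(string FirstName, string LastName);

static FullName GetName() => new("Jane", "Doe");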

If you want an immutable record struct with readonly properties, you can declare the entire record struct readonly (just as you can other structs):

public readonly record struct Person(string FirstName, string LastName);

Along with record structs, C# 10 supports with expressions for all structs, as well as for anonymous types:

var updatedPerson = person with { FirstName = "Mary" };

F# 6

F# 6 aims to make F# simpler and more efficient. This applies to language design, libraries and tools. Our goal for F# 6 (and beyond) is to remove edge cases in the language that surprise users or hinder learning F#. We’re excited to partner with the F# community on this ongoing effort.

Making F# Faster and More Interoperable

The new task {…} syntax directly creates a task and starts it. This is one of the most important features in F# 6, making asynchronous tasks simpler, more performant, and more interoperable with C# and other .NET languages. Previously, creating a .NET task required using async {…} to create an async computation and then calling Async.StartImmediateAsTask.

The task {…} feature is built on a foundation called "resumable code" (RFC FS-1087). Resumable code is a core feature that we expect to use in the future to build other high-performance asynchronous and yielding state machines.

F# 6 also adds other performance features for library authors, including InlineIfLambda and unboxed representations of F# active patterns. One particularly notable performance improvement is in the compilation of list and array expressions, which are now up to 4x faster, as well as better and easier to debug.

Make F# easier to learn and more unified

F# 6 enables the expr[idx] indexing syntax. Until now, F# has used expr.[idx] for indexing. Removing the dot is based on repeated feedback from first-time F# users that the dot notation differs unnecessarily from the standard practice they expect. In new code we recommend systematically using the new expr[idx] indexing syntax, and as a community we should all switch to it.

The F# community contributed important improvements to make the language more uniform in F# 6. The most important of these is removing several inconsistencies and limitations in F#'s indentation rules. Other design additions that make F# more uniform include adding the as pattern; allowing "overloaded custom operations" in computation expressions (useful for DSLs); allowing _ discards on use bindings; and allowing %B for binary formatting in output. The F# core library adds new functions for copy-and-update on lists, arrays, and sequences, as well as additional NativePtr intrinsics. Some legacy features of F#, deprecated since 2.0, now produce errors. Many of these changes better align F# with your expectations, reducing surprises.

F# 6 also adds support for additional "implicit" and "type-directed" conversions. This means fewer explicit upcasts and first-class support for .NET-style implicit conversions. F# has also been adjusted to better fit the era of numeric libraries that use 64-bit integers, with implicit widening of 32-bit integers.

Improved F# tools

Tooling improvements in F# 6 make everyday coding easier. The new "pipeline debugging" lets you step through, set breakpoints in, and inspect the intermediate values of the F# pipeline syntax input |> f1 |> f2. The debug display of shadowed values has been improved, removing a common source of confusion when debugging. F# tooling is also more performant now, with the F# compiler performing the parsing phase in parallel. The F# IDE tooling has been improved as well. F# scripting is now more robust, allowing you to pin the .NET SDK version in use via a global.json file.

Hot Reload

Hot Reload is another performance feature focused on developer productivity. It enables you to make various code edits to a running application, reducing the time you need to wait for the application to rebuild, restart, or re-navigate to where you were after making a code change.

Hot Reload is available through the dotnet watch CLI tool and Visual Studio 2022. You can use Hot Reload with a variety of app types such as ASP.NET Core, Blazor, .NET MAUI, Console, Windows Forms (WinForms), WPF, WinUI 3, Azure Functions, and more.

When using the CLI, simply start your .NET 6 application with dotnet watch, make any supported edit, and save the file (for example, in Visual Studio Code); the changes are applied immediately. If a change is not supported, the details are logged to the command window.

[Image: dotnet watch applying Hot Reload edits]

This image shows edits made with dotnet watch running. I edited a .cs file and a .cshtml file (as noted in the log); both changes were applied to the code and reflected in the browser very quickly, in under half a second.

When using Visual Studio 2022, simply start your application, make supported changes, and apply them using the new Hot Reload button (pictured below). You can also choose to apply changes on save via the drop-down menu on the same button. With Visual Studio 2022, Hot Reload is available for multiple .NET versions: .NET 5+, .NET Core, and .NET Framework. For example, you can make a code-behind change to a button's OnClick event handler. Edits to an application's Main method are not supported.

[Image: the Hot Reload button in Visual Studio 2022]

NOTE: There is a bug in RuntimeInformation.FrameworkDescription which is showcased in this image and will be fixed soon.

Hot Reload also works alongside the existing Edit and Continue capability (when stopped at a breakpoint) and XAML Hot Reload for editing an application's UI in real time. C# and Visual Basic applications are currently supported (F# is not).

Security

Security has been significantly improved in .NET 6. It's always a focus for the team, spanning threat modeling, cryptography, and defense-in-depth mitigations.

On Linux, we rely on OpenSSL for all cryptographic operations, including TLS (required for HTTPS). On macOS and Windows, we rely on functionality provided by the operating system to achieve the same. With each new version of .NET, we often need to add support for a new version of OpenSSL. .NET 6 adds support for OpenSSL 3.

The biggest changes in OpenSSL 3 are the improved FIPS 140-2 module and simpler licensing.

.NET 6 requires OpenSSL 1.1 or higher and will prefer the highest installed version of OpenSSL it can find, up to and including v3. In general, you'll most likely start using OpenSSL 3 when the Linux distribution you use switches to it as the default. Most distros haven't done that yet. For example, if you install .NET 6 on Red Hat 8 or Ubuntu 20.04, you won't (at the time of writing) start using OpenSSL 3.

OpenSSL 3, Windows 10 21H1, and Windows Server 2022 all support ChaCha20Poly1305. You can use this new authenticated encryption scheme in .NET 6 (assuming your environment supports it).

Thanks to Kevin Jones for the Linux support of ChaCha20Poly1305.
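Here's a minimal sketch of using the new scheme (assuming your environment supports it; the key, nonce, and tag sizes are fixed by the algorithm):

using System;
using System.Security.Cryptography;
using System.Text;

if (ChaCha20Poly1305.IsSupported)
{
    byte[] key = RandomNumberGenerator.GetBytes(32);    // 256-bit key
    byte[] nonce = RandomNumberGenerator.GetBytes(12);  // 96-bit nonce, unique per message
    byte[] plaintext = Encoding.UTF8.GetBytes("hello .NET 6");
    byte[] ciphertext = new byte[plaintext.Length];
    byte[] tag = new byte[16];                          // 128-bit authentication tag

    using var cipher = new ChaCha20Poly1305(key);
    cipher.Encrypt(nonce, plaintext, ciphertext, tag);

    byte[] decrypted = new byte[ciphertext.Length];
    cipher.Decrypt(nonce, ciphertext, tag, decrypted);
    Console.WriteLine(Encoding.UTF8.GetString(decrypted));
}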

We also published a new runtime security mitigation roadmap. It's important that the runtime you use isn't susceptible to textbook attack types, and we're meeting that need. In .NET 6, we built initial implementations of W^X and Intel Control-flow Enforcement Technology (CET). W^X is fully supported, enabled by default for macOS Arm64, and opt-in for other environments. CET is opt-in and in preview for all environments. We expect to enable both technologies by default in all environments in .NET 7.

Arm64

There's a lot of excitement around Arm64 these days, for laptops, cloud hardware, and other devices. We on the .NET team share that excitement and are doing our best to keep up with this industry trend. We work directly with engineers at Arm Holdings, Apple, and Microsoft to ensure our implementations are correct and optimized, and that our plans are aligned. These close partnerships help us a lot.

  • Special thanks to Apple for sending our team a bushel of Arm64 development kits to use prior to the release of the M1 chip and for providing significant technical support.
  • Special thanks to Arm Holdings, whose engineers did a code review of our Arm64 changes and made performance improvements.

We added initial support for Arm64 in .NET Core 3.0, and Arm32 support before that. The team has made significant investments in Arm64 over the last few releases, and that will continue for the foreseeable future. In .NET 6, our primary focus was supporting the new Apple Silicon chips and the x64 emulation scenario on macOS and Windows Arm64 operating systems.

You can install Arm64 and x64 versions of .NET on macOS 11+ and Windows 11+ Arm64 operating systems. We had to make multiple design choices and product changes to make it work.

Our strategy is "pro native architecture". We recommend that you always use the SDK that matches the native architecture, i.e. the Arm64 SDK on macOS and Windows Arm64. The SDK is a large body of software, and running natively on an Arm64 chip performs much better than running under emulation. We've updated the CLI to make this simple. We won't be focusing on optimizing emulated x64.

By default, if you dotnet run a .NET 6 application with the Arm64 SDK, it runs as Arm64. You can easily switch to running as x64 with the -a argument, e.g. dotnet run -a x64. The same argument works with other CLI verbs. For more information, see .NET 6 RC2 Update for macOS and Windows Arm64.

I want to make sure to cover one of the subtleties. When you use -a x64, the SDK still runs natively as Arm64. There are fixed points at process boundaries in the .NET SDK architecture; for the most part, a process must be all Arm64 or all x64. I'm simplifying a bit, but the .NET CLI waits for the last process in the SDK workflow to be created and launches that one as the chip architecture you requested (e.g., x64). That's the process your code runs in. That way, as a developer, you get the benefits of Arm64, while your code runs where it needs to. This is only relevant if you need to run some code as x64; if you don't, you can run everything as Arm64, which is great.

Arm64 support

For macOS and Windows Arm64, here’s what you need to know:

  • The .NET 6 Arm64 and x64 SDKs are supported and recommended.
  • All supported Arm64 and x64 runtimes are supported.
  • .NET Core 3.1 and .NET 5 SDKs work, but offer less functionality and in some cases are not fully supported.
  • dotnet test does not yet work correctly with x64 emulation. We are working on it. dotnet test will be improved as part of the 6.0.200 release, and possibly earlier.

See .NET Support for macOS and Windows Arm64 for more complete information.

Linux is missing from this discussion. It doesn't support x64 emulation in the way that macOS and Windows do, so these new CLI features and support approaches don't directly apply to Linux, nor does Linux need them.

Windows Arm64

We have a simple tool to demonstrate the environment in which .NET runs.

C:\Users\rich>dotnet tool install -g dotnet-runtimeinfo
You can invoke the tool using the following command: dotnet-runtimeinfo
Tool 'dotnet-runtimeinfo' (version '1.0.5') was successfully installed.

C:\Users\rich>dotnet runtimeinfo
         42
         42              ,d                             ,d
         42              42                             42
 ,adPPYb,42  ,adPPYba, MM42MMM 8b,dPPYba,   ,adPPYba, MM42MMM
a8"    `Y42 a8"     "8a  42    42P'   `"8a a8P_____42   42
8b       42 8b       d8  42    42       42 8PP"""""""   42
"8a,   ,d42 "8a,   ,a8"  42,   42       42 "8b,   ,aa   42,
 `"8bbdP"Y8  `"YbbdP"'   "Y428 42       42  `"Ybbd8"'   "Y428

As you can see, the tool runs natively on Windows Arm64. I’ll show you what ASP.NET Core looks like.

[Image: ASP.NET Core running natively on Windows Arm64]

macOS Arm64

You can see that the experience is similar on macOS Arm64, and also shows the architectural targets.

app % dotnet --version
6.0.100
app % dotnet --info | grep RID
 RID:         osx-arm64
app % cat Program.cs
using System.Runtime.InteropServices;
using static System.Console;

WriteLine($"Hello, {RuntimeInformation.OSArchitecture} from {RuntimeInformation.FrameworkDescription}!");
app % dotnet run
Hello, Arm64 from .NET 6.0.0-rtm.21522.10!
app % dotnet run -a x64
Hello, X64 from .NET 6.0.0-rtm.21522.10!
app %

This session shows that Arm64 execution is the default for the Arm64 SDK and how easy it is to switch between targeting Arm64 and x64 using the -a argument. The exact same experience works on Windows Arm64.

[Image: the same Arm64/x64 switching experience with ASP.NET Core]
  
This image demonstrates the same, but using ASP.NET Core. I’m using the same .NET 6 Arm64 SDK you see in the image above.

Docker on Arm64

Docker supports containers running with the native architecture and under emulation, with native architecture as the default. This may seem obvious, but it can be confusing when most of the Docker Hub catalog is x64-oriented. You can use --platform linux/amd64 to request an x64 image.

We only support running Linux Arm64 .NET container images on Arm64 operating systems. This is because we never supported running .NET in QEMU, which is what Docker uses for architecture emulation. It appears this may be due to a limitation of QEMU.

[Image: the .NET console sample running in Docker on Arm64]

This image demonstrates a console sample we maintain: mcr.microsoft.com/dotnet/samples. It’s an interesting sample because it contains some basic logic for printing the CPU and memory limit information you can use. The images I show set CPU and memory limits.

Try it yourself: docker run --rm mcr.microsoft.com/dotnet/samples

Arm64 performance

The Apple Silicon and x64 emulation support projects were very important, but we've also improved Arm64 performance more generally.

[Image: Arm64 stack-frame zeroing benchmark]

This image demonstrates an improvement to zeroing out the contents of a stack frame, which is a common operation. The green line is the new behavior, while the orange line is another (less beneficial) experiment, both have improved relative to the baseline, denoted by the blue line. For this test, lower is better.

Containers

.NET 6 is better for containers, largely as a result of all the improvements discussed in this post, for both Arm64 and x64. We've also made key changes that help in specific scenarios. Validating container improvements with .NET 6 demonstrates some of these improvements being tested together.

Windows container improvements and new environment variables are also included in the November .NET Framework 4.8 Containers Update, released tomorrow, November 9th.

Release notes can be found in our docker repository:

  • .NET 6 Containers Release Notes
  • .NET Framework 4.8 November 2021 Containers Release Notes

Windows container

.NET 6 adds support for Windows Process Isolation Containers. If you’re using Windows containers in Azure Kubernetes Service (AKS), you’re relying on process-isolated containers. Process-isolated containers can be thought of as very similar to Linux containers. Linux containers use cgroups, and Windows process-isolated containers use Job Objects. Windows also offers Hyper-V Containers, which provide greater isolation through stronger virtualization. There are no changes in .NET 6 for Hyper-V Containers.

The main value of this change is that Environment.ProcessorCount now reports the correct value with Windows process-isolated containers. If you create a 2-core container on a 64-core machine, Environment.ProcessorCount returns 2. In previous versions, this property reported the total number of processors on the machine, regardless of the limit specified via the Docker CLI, Kubernetes, or another container orchestrator/runtime. This value is used by various parts of .NET for scaling purposes, including the .NET garbage collector (though it relies on a related, lower-level API). Community libraries also rely on this API for scaling.

We recently validated this new capability with a customer running Windows containers in production using a large number of pods on AKS. They were able to run successfully with 50% of their typical memory configuration, a level that previously caused OutOfMemoryExceptions. They didn't take the time to find the minimum memory configuration, but we'd guess it's significantly less than 50% of their typical configuration. As a result of this change, they'll save money by moving to a cheaper Azure configuration. That's a nice, easy win just from upgrading.

Optimizing scaling

We've heard from users that some applications don't scale optimally when Environment.ProcessorCount reports the correct value. If that sounds like the opposite of what you just read about Windows containers, it sort of is. .NET 6 now provides the DOTNET_PROCESSOR_COUNT environment variable to manually control the value of Environment.ProcessorCount. In a typical use case, an application might be configured with 4 cores on a 64-core machine yet scale best with 8 or 16 cores. This environment variable can be used to enable that scaling.

This model might seem odd, in that Environment.ProcessorCount and the --cpus value (via the Docker CLI) can differ. By default, container runtimes limit CPU time to core equivalents, not actual cores. That means when you ask for 4 cores, you get CPU time equivalent to 4 cores, but your application may (in theory) run on many more cores, even on all 64 cores of a 64-core machine for short periods. That may allow your application to scale better on more than 4 threads (continuing the example), and allocating more may be worthwhile. This assumes that thread allocation is based on the value of Environment.ProcessorCount. If you choose to set a higher value, your application will likely use more memory. For some workloads that's an easy tradeoff. At minimum, it's a new option you can test.

Both Linux and Windows containers support this new feature.
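A small sketch of the kind of scaling decision this value drives (illustrative; the right degree of parallelism is workload-specific):

using System;
using System.Threading.Tasks;

// On .NET 6 this reflects the container's CPU limit by default, or the
// DOTNET_PROCESSOR_COUNT override when that variable is set.
int cores = Environment.ProcessorCount;
Console.WriteLine($"ProcessorCount: {cores}");

// Typical pattern: worker counts derived from ProcessorCount.
var options = new ParallelOptions { MaxDegreeOfParallelism = cores };
Parallel.For(0, 1_000, options, i =>
{
    // CPU-bound work here.
});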

Docker also provides a CPU groups feature, where your application is affinitized to specific cores. This feature isn't recommended in that scenario, because it concretely defines the number of cores an application can access. We've also seen some issues using it with Hyper-V containers, and it isn't really intended for that isolation mode.

Debian 11 “bullseye”

We closely monitor the life cycle and release schedule of Linux distributions and try to make the best choice on your behalf. Debian is the Linux distribution we use for our default Linux images. If you pull the 6.0 tag from one of our container repositories, you’ll pull a Debian image (assuming you’re using Linux containers). With each new .NET release, we consider whether a new Debian release should be adopted.

As a policy, we don't change the Debian version for convenience tags, e.g. 6.0, mid-release. If we did, some apps would certainly break. That makes choosing the Debian version at the beginning of a release very important. Also, these images get a lot of use, largely because they're what the convenient version tags refer to.

Debian and .NET releases are naturally not planned together. When we started .NET 6, we saw that Debian "bullseye" might be released in 2021, and we decided to bet on bullseye from the start. We began shipping bullseye-based container images with .NET 6 Preview 1 and decided not to look back. The bet was that .NET 6 would lose the release race to bullseye. As of August 8th, we still didn't know when bullseye would ship, and we were three months from our own release on November 8th. We didn't want to ship production .NET 6 on a preview of Linux, but we stuck with the plan of losing this race by a narrow margin.

We were pleasantly surprised when Debian 11 "bullseye" was released on August 14th. We lost the race but won the bet. That means .NET 6 users get the best and latest Debian by default, from day one. We believe Debian 11 and .NET 6 will be an excellent combination for many users. Sorry buster, we hit the bullseye.

Newer distributions include newer major versions of various packages in their package feeds, and often get CVE fixes faster. This is in addition to newer kernels. New releases serve users better.

Looking further ahead, we will soon start planning support for Ubuntu 22.04. Ubuntu is another distribution in the Debian family that is very popular with .NET developers. We want to provide same-day support for new Ubuntu LTS releases.

Hats off to Tianon Gravi for maintaining the Debian images for the community and helping us when we have problems.

dotnet monitor

dotnet monitor is an important diagnostic tool for containers. It has been available as a sidecar container image for some time, but is in an unsupported “experimental” state. As part of .NET 6, we are releasing a .NET 6 based dotnet monitor image that is fully supported in production.

dotnet monitor is already used by Azure App Service as an implementation detail of its ASP.NET Core Linux diagnostics experience. This is an example of the intended scenarios: building higher-level, higher-value experiences on top of dotnet monitor.

You can now pull new images:

docker pull mcr.microsoft.com/dotnet/monitor:6.0

dotnet monitor makes it easier to access diagnostic information (logs, traces, process dumps) from .NET processes. On your desktop it's easy to access all the diagnostic information you need, but those same familiar techniques might not work in production using containers. dotnet monitor provides a unified way to collect these diagnostic artifacts whether you're running on your desktop machine or in a Kubernetes cluster. There are two different mechanisms for collecting them:

  • An HTTP API for ad-hoc collection of artifacts. You can call these API endpoints when you already know your application is experiencing an issue and you want to gather more information.
  • Rule-based triggers for always-on collection of artifacts. You can configure rules to collect diagnostic data when desired conditions are met, for example, collecting a process dump when CPU usage is sustained at a high level.

dotnet monitor provides a common diagnostic API for .NET applications that works anywhere with any tool. The "common API" is not a .NET API, but a web API that you can call and query. dotnet monitor includes an ASP.NET web server that directly interacts with, and exposes data from, the diagnostic server in the .NET runtime. dotnet monitor's design enables high-performance monitoring in production, with secure access controls for privileged information. dotnet monitor interacts with the runtime, across container boundaries, through a non-internet-addressable Unix domain socket. That communication model fits this use case well.

Structured JSON logs

The JSON formatter is now the default console logger in the .NET 6 aspnet container images. The default in .NET 5 was the simple console formatter. This change was made so that the default configuration works with automated tools that rely on a machine-readable format such as JSON.

The output of the aspnet image now looks like this:
$ docker run --rm -it -p 8000:80 mcr.microsoft.com/dotnet/samples:aspnetapp
{"EventId":60,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository","Message":"Storing keys in a directory u0027/root/.aspnet/DataProtection-Keysu0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","State":{"Message":"Storing keys in a directory u0027/root/.aspnet/DataProtection-Keysu0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","path":"/root/.aspnet/DataProtection-Keys","{OriginalFormat}":"Storing keys in a directory u0027{path}u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed."}}
{"EventId":35,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager","Message":"No XML encryptor configured. Key {86cafacf-ab57-434a-b09c-66a929ae4fd7} may be persisted to storage in unencrypted form.","State":{"Message":"No XML encryptor configured. Key {86cafacf-ab57-434a-b09c-66a929ae4fd7} may be persisted to storage in unencrypted form.","KeyId":"86cafacf-ab57-434a-b09c-66a929ae4fd7","{OriginalFormat}":"No XML encryptor configured. Key {KeyId:B} may be persisted to storage in unencrypted form."}}
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: http://[::]:80","State":{"Message":"Now listening on: http://[::]:80","address":"http://[::]:80","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrlu002BC to shut down.","State":{"Message":"Application started. Press Ctrlu002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrlu002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Production","State":{"Message":"Hosting environment: Production","envName":"Production","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app","State":{"Message":"Content root path: /app","contentRoot":"/app","{OriginalFormat}":"Content root path: {contentRoot}"}}

You can change the log format by setting or unsetting the Logging__Console__FormatterName environment variable, or via a code change (see Console log formatting for more details).

After the change, you’ll see output like this (just like .NET 5):

$ docker run --rm -it -p 8000:80 -e Logging__Console__FormatterName="" mcr.microsoft.com/dotnet/samples:aspnetapp
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {8d4ddd1d-ccfc-4898-9fe1-3e7403bf23a0} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

NOTE: This change does not affect .NET SDKs on developer machines, such as dotnet run. This change is specific to aspnet container images.
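If you'd rather select the formatter in code than through the environment variable, a minimal sketch using the ASP.NET Core 6 hosting model might look like this (AddJsonConsole and AddSimpleConsole are the built-in console formatters):

var builder = WebApplication.CreateBuilder(args);

// Choose the console formatter explicitly.
builder.Logging.ClearProviders();
builder.Logging.AddJsonConsole();      // structured JSON output (the container default)
// builder.Logging.AddSimpleConsole(); // the .NET 5-style simple format

var app = builder.Build();
app.MapGet("/", () => "Hello World!");
app.Run();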

Support for OpenTelemetry metrics

As part of our focus on observability, we’ve been adding support for OpenTelemetry for the last few .NET releases. In .NET 6, we added support for the OpenTelemetry Metrics API. By adding support for OpenTelemetry, your applications can seamlessly interoperate with other OpenTelemetry systems.

System.Diagnostics.Metrics is the .NET implementation of the OpenTelemetry Metrics API specification. The Metrics APIs are designed specifically for processing raw measurements, with the goal of efficiently and concurrently producing continuous summaries of those measurements.

The API includes the Meter class, which is used to create instrument objects. It exposes four instrument classes, Counter, Histogram, ObservableCounter, and ObservableGauge, to support different measurement scenarios. Additionally, it exposes the MeterListener class to allow listening to the measurements recorded by instruments, for aggregation and grouping purposes.

The OpenTelemetry .NET implementation will be extended to use these new APIs, which add support for Metrics observability scenarios.

Library measurement recording example

Meter meter = new Meter("io.opentelemetry.contrib.mongodb", "v1.0");
Counter<int> counter = meter.CreateCounter<int>("Requests");
counter.Add(1);
counter.Add(1, KeyValuePair.Create<string, object>("request", "read"));

Listening example

MeterListener listener = new MeterListener();
listener.InstrumentPublished = (instrument, meterListener) =>
{
    if (instrument.Name == "Requests" && instrument.Meter.Name == "io.opentelemetry.contrib.mongodb")
    {
        meterListener.EnableMeasurementEvents(instrument, null);
    }
};
listener.SetMeasurementEventCallback<int>((instrument, measurement, tags, state) =>
{
    Console.WriteLine($"Instrument: {instrument.Name} has recorded the measurement {measurement}");
});
listener.Start();

Windows Forms

We continue to make important improvements in Windows Forms. .NET 6 includes better control accessibility, the ability to set an application-wide default font, template updates, and more.

Accessibility Improvements

In this release, we've added UIA providers for CheckedListBox, LinkLabel, Panel, ScrollBar, TabControl, and TrackBar, which enable tools like Narrator, and test automation, to interact with the elements of your application.

Default font

You can now set a default font for an application with Application.SetDefaultFont:

void Application.SetDefaultFont(Font font)
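For example (a sketch; the font choice is illustrative), call it before any forms are created:

// Must run before the first window is created.
Application.SetDefaultFont(new Font(new FontFamily("Verdana"), 9f));
Application.Run(new Form1());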

Minimal application

Here’s a minimal Windows Forms application with .NET 6:

class Program
{
    [STAThread]
    static void Main()
    {
        ApplicationConfiguration.Initialize();
        Application.Run(new Form1());
    }
}

As part of the .NET 6 release, we've been updating most templates to be more modern and minimal, including for Windows Forms. We decided to keep the Windows Forms template somewhat more traditional, in part because the [STAThread] attribute needs to apply to the application entry point. However, there's more going on than meets the eye.

ApplicationConfiguration.Initialize() is a source-generated API that makes the following calls behind the scenes:

Application.EnableVisualStyles();

Application.SetCompatibleTextRenderingDefault(false);

Application.SetDefaultFont(new Font(…));

Application.SetHighDpiMode(HighDpiMode.SystemAware);

The parameters for these calls are configurable via MSBuild properties in the csproj or props file.

The Windows Forms designer in Visual Studio 2022 is also aware of these properties (for now, it only reads the default font) and can show your application as it would appear at runtime:

[Image: the Windows Forms designer rendering with the configured default font]

Template updates

C#’s Windows Forms templates have been updated to support the new application bootstrap, global using directives, file-scope namespaces, and nullable reference types.

More runtime designers

You can now build general-purpose designers (for example, report designers), because .NET 6 provides all the pieces that designers and designer-related infrastructure were missing. See this blog post for details.

Single-file applications

In .NET 6, in-memory single-file applications have been enabled for Windows and macOS. In .NET 5, this type of deployment was limited to Linux. You can now publish a single-file binary that deploys and runs as a single file on all supported operating systems. Single-file apps no longer extract any core runtime assemblies to a temporary directory.

This capability is based on a building block called the "superhost". An "apphost" is the executable that launches an application in the non-single-file case, e.g. myapp.exe or ./myapp. The apphost contains code to find the runtime, load it, and start the application with it. The superhost still performs some of those tasks, but uses a statically linked copy of all the CoreCLR native binaries. Static linking is the approach we use to achieve the single-file experience. Native dependencies, such as ones shipped with a NuGet package, are the notable exception to single-file embedding. By default, they are not included in the single file. For example, the WPF native dependencies are not part of the superhost, so WPF applications produce additional files beside the single file. You can use the IncludeNativeLibrariesForSelfExtract setting to embed (and then extract) native dependencies.

Static analysis

We improved the single-file analyzer to allow custom warnings. If your API doesn't work in single-file publishing, you can now mark it with the [RequiresAssemblyFiles] attribute, and a warning will appear if the analyzer is enabled. Adding the attribute also silences all single-file-related warnings inside the method, so you can use it to propagate warnings up to your public API, as the sketch below shows.
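A sketch of what that can look like (the class and member here are hypothetical; Assembly.Location is one of the APIs that misbehaves in single-file apps, returning an empty string):

using System.Diagnostics.CodeAnalysis;
using System.IO;

public class PluginLoader
{
    // Warns single-file callers; also silences single-file warnings inside the method.
    [RequiresAssemblyFiles]
    public static string GetPluginDirectory() =>
        Path.GetDirectoryName(typeof(PluginLoader).Assembly.Location)!;
}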

The single-file analyzer is automatically enabled for exe projects when PublishSingleFile is set to true, but you can also enable it for any project by setting EnableSingleFileAnalysis to true. This is helpful if you want your library to work well in single-file applications.

In .NET 5, we added warnings for Assembly.Location and some other APIs that behave differently in single-file packages.

Compression

Single-file bundles now support compression by setting the EnableCompressionInSingleFile property to true. At runtime, files are decompressed into memory as needed. Compression can deliver big space savings in some scenarios.

Let’s look at a single file publish (with and without compression) used with the NuGet Package Explorer.

Uncompressed: 172 MB

Compressed: 71.6 MB

Compression can significantly increase application startup time, especially on Unix platforms, because they have a no-copy fast-start path that can't be used with compression. You should test your application with compression enabled to see whether the additional startup cost is acceptable.

Single-file debugging

Single-file applications can currently be debugged only with a platform debugger such as WinDbg. We're looking at adding Visual Studio debugging in a later version of Visual Studio 2022.

Single-file signing on macOS

Single-file applications now meet Apple’s notarization and signing requirements on macOS. The specific change has to do with the way we build single-file applications based on a discrete file layout.

Apple started implementing new signing and notarization requirements for macOS Catalina. We’ve been working closely with Apple to understand the requirements and find solutions to enable development platforms like .NET to work in that environment. We’ve made product changes and documented user workflows to meet Apple’s requirements in the last few .NET releases. One of the remaining gaps is single-file signing, which is a requirement for distributing .NET applications on macOS, including in the macOS Store.

IL trimming

The team has been working on IL trimming for several releases, and .NET 6 represents a major step forward on that journey. We've been working to make the more aggressive trim mode safe and predictable, and we're now confident enough to make it the default: TrimMode=link was previously opt-in and is now the default.

We have a three-pronged trimming strategy:

  • Improve the trimming capability of the platform.
  • Annotate the platform to provide better warnings and enable others to do the same.
  • On top of that, make the default trim mode more aggressive so that apps get smaller.

Trimming has been in preview until now because of unreliable results for applications that use unannotated reflection. With trim warnings in place, the experience should now be predictable. Applications without trim warnings should trim correctly and observe no behavior change at runtime. Currently, only the core .NET libraries are fully annotated for trimming, but we hope to see the ecosystem annotate for trimming and become trim-compatible.

Reduce application size

Let's take a look at this trimming improvement using crossgen, one of the SDK tools. Crossgen can be trimmed with only a handful of trim warnings, which the crossgen team was able to resolve.

First, let's look at publishing crossgen as a self-contained application without trimming. It is 80 MB (including the .NET runtime and all the libraries).

Then we can try the (now legacy) .NET 5 default trim mode, copyused. The result drops to 55 MB.

The new .NET 6 default trim mode, link, further reduces the self-contained size to 36 MB.

We hope the new link trim mode better aligns with expectations for trimming: significant savings and predictable results.

Warnings are enabled by default

Trim warnings tell you about places where trimming might remove code that is used at runtime. These warnings were previously disabled by default because they were very noisy, largely due to the .NET platform not participating in trimming as a first-class scenario.

We have annotated most of the .NET libraries so that they produce accurate trim warnings, so we feel it is time to enable trim warnings by default. The ASP.NET Core and Windows Desktop Runtime libraries have not yet been annotated. We plan to annotate the ASP.NET service components next (after .NET 6), and we would like to see the community annotate NuGet libraries once .NET 6 is released.

You can disable the warnings by setting <SuppressTrimAnalysisWarnings>true</SuppressTrimAnalysisWarnings>.

More information:

  • Trim warnings
  • Introduction to trimming
  • Prepare .NET libraries for trimming

Shared with Native AOT

We also implemented the same trim warnings for the Native AOT experiment, which should improve that compilation experience in much the same way.

math

We have significantly improved the math API. Some in the community are already enjoying these improvements.

performance-oriented API

Performance-oriented math APIs have been added to System.Math. Their implementations are hardware accelerated if the underlying hardware supports it.

New APIs:

  • SinCos is used to calculate Sin and Cos at the same time.
  • ReciprocalEstimate is used to calculate an approximation of 1/x.
  • ReciprocalSqrtEstimate is used to calculate an approximation of 1 / Sqrt(x).

New overload:

  • Clamp, DivRem, Min and Max support nint and nuint.
  • Abs and Sign support nint.
  • The DivRem variant returns a tuple.
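The following is a minimal sketch of a few of these additions (assuming .NET 6):

using System;

(double sin, double cos) = Math.SinCos(Math.PI / 4); // both values in one call
Console.WriteLine($"{sin:F4}, {cos:F4}");            // 0.7071, 0.7071

(int quotient, int remainder) = Math.DivRem(7, 3);   // tuple-returning DivRem variant
Console.WriteLine($"{quotient} r {remainder}");      // 2 r 1

double approx = Math.ReciprocalEstimate(3.0);        // fast approximation of 1/3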

Performance improvements:

ScaleB was ported to C#, resulting in a 93% faster call. Thanks to Alex Covington.

big integer performance

Parsing of BigIntegers from decimal and hexadecimal strings was improved. We measured improvements as high as 89%.

Thanks Joseph Da Silva.

Complex APIs are now annotated as readonly

Various APIs in System.Numerics.Complex are now annotated as readonly to ensure that no copies are made for readonly values or values passed by in-reference.

Credit to hrrrrustic.

BitConverter now supports floating point to unsigned integer bitcasting

BitConverter now supports DoubleToUInt64Bits, HalfToUInt16Bits, SingleToUInt32Bits, UInt16BitsToHalf, UInt32BitsToSingle, and UInt64BitsToDouble. This should make it easier to do floating point bit operations when needed.
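A quick sketch of a round trip using the new methods (assuming .NET 6):

using System;

ulong bits = BitConverter.DoubleToUInt64Bits(1.0);
Console.WriteLine($"0x{bits:X16}");                          // 0x3FF0000000000000
double roundTripped = BitConverter.UInt64BitsToDouble(bits); // back to 1.0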

Credit to Michal Petryka.

BitOperations supports additional functions

BitOperations now supports IsPow2, RoundUpToPowerOf2 and provides nint/nuint overloads of existing functions.
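For example (a minimal sketch):

using System;
using System.Numerics;

Console.WriteLine(BitOperations.IsPow2(16));             // True
Console.WriteLine(BitOperations.RoundUpToPowerOf2(17u)); // 32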

Thanks to John Kelly, Huo Yaoyuan and Robin Lindner.

Vector<T> , Vector2, Vector3, and Vector4 improvements

Vector<T> now supports the nint and nuint primitive types added in C# 9. For example, this change should make it easier to use SIMD instructions with pointer-sized or platform-dependent-length data.

Vector<T> now supports a Sum method to simplify computing the "horizontal sum" of all elements in a vector. Credit to Ivan Zratanov.

Vector<T> now supports a generic As<TFrom, TTo> method to simplify working with vectors in generic contexts where the concrete type is unknown. Thank you Huo Yaoyuan.

Overloads that accept Span<T> were added to Vector2, Vector3, and Vector4 to improve the experience when you need to load or store vector types.
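The following sketch shows a couple of these additions together (assuming .NET 6):

using System;
using System.Numerics;

var v = new Vector<int>(3);         // every lane set to 3
int total = Vector.Sum(v);          // horizontal sum across all lanes

Span<float> buffer = stackalloc float[2];
new Vector2(1f, 2f).CopyTo(buffer); // Span<T> overload for storing
var restored = new Vector2(buffer); // and for loading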

Better parsing of standard number formats

We improved the formatting and parsing of the standard numeric types, specifically ToString and TryFormat. They now honor precision requirements greater than 99 decimal places and produce accurate results to that many digits. In addition, the parsers now better handle trailing zeros in Parse inputs.

The following examples demonstrate before and after behavior.

  • 32.ToString("C100") -> C132

    • .NET 6: $32.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    • .NET 5: formatting code was artificially limited to handling a precision of <= 99. For precision >= 100, the input was interpreted as a custom format instead.
  • 32.ToString("H99") -> throws a FormatException

    • .NET 6: throws a FormatException.
    • This is correct behavior, but it is shown here for comparison with the next example.
  • 32.ToString("H100") -> H132

    • .NET 6: throws a FormatException.
    • .NET 5: H is an invalid format specifier, so a FormatException should have been thrown. Instead, precision >= 100 was interpreted as a custom format, which means the wrong value was returned.
  • double.Parse("9007199254740997.0") -> 9007199254740998

    • .NET 6: returns 9007199254740996.
    • .NET 5: 9007199254740997.0 cannot be exactly represented in IEEE 754 format. Under our rounding scheme, the correct return value is 9007199254740996, but the trailing ".0" in the input forced the parser to round incorrectly and return 9007199254740998.

System.Text.Json

System.Text.Json provides various high-performance APIs for processing JSON documents. Over the past few releases, we've added new features to further improve JSON processing performance and ease the friction for those wishing to migrate from Newtonsoft.Json. This release continues down that path and takes a big step forward in performance, particularly with the serializer source generator.

JsonSerializer source generation

Note: Applications built using .NET 6 RC1 or earlier source code should be recompiled.

The backbone of almost all .NET serializers is reflection. Reflection is a great capability for certain scenarios, but not as the basis of high-performance cloud-native applications, which typically (de)serialize and process large volumes of JSON documents. Reflection is a problem for startup time, memory usage, and assembly trimming.

An alternative to runtime reflection is compile-time source generation. In .NET 6, we include a new source generator as part of System.Text.Json. The JSON source generator works together with JsonSerializer and can be configured in multiple ways.

It can provide the following benefits:

  • Reduce startup time
  • Improve serialization throughput
  • Reduce private memory usage
  • Remove runtime use of System.Reflection and System.Reflection.Emit
  • IL Trimming Compatibility

By default, the JSON source generator emits serialization logic for the given serializable types. This delivers higher performance than the existing JsonSerializer methods by generating source code that uses Utf8JsonWriter directly. In short, source generators offer a way to produce a different implementation at compile time in order to make the runtime experience better.

Given a simple type:

namespace Test
{
    internal class JsonMessage
    {
        public string Message { get; set; }
    }
}

The source generator can be configured to generate serialization logic for instances of the example JsonMessage type. Note that the class name JsonContext is arbitrary. You can use any class name you want for the generated source.

using System.Text.Json.Serialization;

namespace Test
{
    [JsonSerializable(typeof(JsonMessage))]
    internal partial class JsonContext : JsonSerializerContext
    {
    }
}

A serializer invocation using this mode might look like the following example, which provides the best possible performance.

using MemoryStream ms = new();
using Utf8JsonWriter writer = new(ms);

JsonMessage jsonMessage = new() { Message = "Hello, world!" };
JsonSerializer.Serialize(writer, jsonMessage, JsonContext.Default.JsonMessage);
writer.Flush();

// Writer contains:
// {"Message":"Hello, world!"}

The fastest and most optimized source code generation mode – based on Utf8JsonWriter – is currently only available for serialization. Utf8JsonReader may provide similar support for deserialization in the future based on your feedback.

The source generator also emits type metadata initialization logic, which also facilitates deserialization. To deserialize a JsonMessage instance that uses pre-generated type metadata, you can do the following:

JsonSerializer.Deserialize(json, JsonContext.Default.JsonMessage);

JsonSerializer supports IAsyncEnumerable

You can now (de)serialize IAsyncEnumerable<T> values as JSON arrays with System.Text.Json. The following examples use streams as a representation of any asynchronous data source. The source could be a file on the local machine, or the result of a database query or a web service API call.

JsonSerializer.SerializeAsync has been updated to recognize and provide special handling for IAsyncEnumerable values.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

static async IAsyncEnumerable<int> PrintNumbers(int n)
{
    for (int i = 0; i < n; i++) yield return i;
}

using Stream stream = Console.OpenStandardOutput();
var data = new { Data = PrintNumbers(3) };
await JsonSerializer.SerializeAsync(stream, data); // prints {"Data":[0,1,2]}

IAsyncEnumerable<T> values are only supported by the asynchronous serialization methods. Attempting to serialize one with a synchronous method results in a NotSupportedException being thrown.

Streaming deserialization requires a new API to return IAsyncEnumerable<T> . We added the JsonSerializer.DeserializeAsyncEnumerable method for this, as you can see in the following example.

using System;
using System.IO;
using System.Text;
using System.Text.Json;

var stream = new MemoryStream(Encoding.UTF8.GetBytes("[0,1,2,3,4]"));
await foreach (int item in JsonSerializer.DeserializeAsyncEnumerable<int>(stream))
{
    Console.WriteLine(item);
}

This example will deserialize elements on demand and is useful when working with particularly large data streams. It only supports reading from root-level JSON arrays, although this may be relaxed in the future based on feedback.

The existing DeserializeAsync method nominally supports IAsyncEnumerable<T>, but only within the confines of its non-streaming method signature: it must return the final result as a single value, as shown in the following example.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.Json;

var stream = new MemoryStream(Encoding.UTF8.GetBytes(@"{""Data"":[0,1,2,3,4]}"));
var result = await JsonSerializer.DeserializeAsync<MyPoco>(stream);
await foreach (int item in result.Data)
{
    Console.WriteLine(item);
}

public class MyPoco
{
    public IAsyncEnumerable<int> Data { get; set; }
}

In this example, the deserializer buffers all IAsyncEnumerable contents in memory before returning the deserialized object. This is because the deserializer needs to consume the entire JSON value before returning the result.

System.Text.Json: Writable DOM functionality

The Writable JSON DOM feature adds a new simple and performant programming model to System.Text.Json. This new API is attractive because it avoids the need for strongly typed serialization contracts, and the DOM is mutable, unlike the existing JsonDocument type.

This new API has the following benefits:

  • A lightweight alternative to serialization for cases where using POCO types is not possible or desired, or when the JSON schema is not fixed and must be inspected.
  • Enables efficient modification of a subset of a large tree. For example, it is possible to efficiently navigate to a subsection of a large JSON tree and read an array or deserialize a POCO from that subsection. LINQ can also be used with it.

The following example demonstrates the new programming model.

    // Parse a JSON object
    JsonNode jNode = JsonNode.Parse("{\"MyProperty\":42}");
    int value = (int)jNode["MyProperty"];
    Debug.Assert(value == 42);
    // or
    value = jNode["MyProperty"].GetValue<int>();
    Debug.Assert(value == 42);

    // Parse a JSON array
    jNode = JsonNode.Parse("[10,11,12]");
    value = (int)jNode[1];
    Debug.Assert(value == 11);
    // or
    value = jNode[1].GetValue<int>();
    Debug.Assert(value == 11);

    // Create a new JsonObject using object initializers and array params
    var jObject = new JsonObject
    {
        ["MyChildObject"] = new JsonObject
        {
            ["MyProperty"] = "Hello",
            ["MyArray"] = new JsonArray(10, 11, 12)
        }
    };

    // Obtain the JSON from the new JsonObject
    string json = jObject.ToJsonString();
    Console.WriteLine(json); // {"MyChildObject":{"MyProperty":"Hello","MyArray":[10,11,12]}}

    // Indexers for property names and array elements are supported and can be chained
    Debug.Assert(jObject["MyChildObject"]["MyArray"][1].GetValue<int>() == 11);

ReferenceHandler.IgnoreCycles

JsonSerializer (System.Text.Json) now supports the ability to ignore cycles when serializing an object graph. The ReferenceHandler.IgnoreCycles option has behavior similar to Newtonsoft.Json's ReferenceLoopHandling.Ignore. One key difference is that the System.Text.Json implementation replaces reference cycles with the null JSON token instead of ignoring the object reference.

You can see ReferenceHandler.IgnoreCycles in action in the following example. In this case, the Next property is serialized as null because otherwise it would create a loop.

class Node
{
    public string Description { get; set; }
    public object Next { get; set; }
}

void Test()
{
    var node = new Node { Description = "Node 1" };
    node.Next = node;

    var opts = new JsonSerializerOptions { ReferenceHandler = ReferenceHandler.IgnoreCycles };

    string json = JsonSerializer.Serialize(node, opts);
    Console.WriteLine(json); // Prints {"Description":"Node 1","Next":null}
}

build from source

Building from source allows you to build the .NET SDK from source on your own computer with just a few commands. Let me explain why this project is important.

Source build is a scenario, and an infrastructure, that we've been working on in collaboration with Red Hat since before the release of .NET Core 1.0. After several years, we are very close to delivering a fully automated version of it. This capability matters for Red Hat Enterprise Linux (RHEL) .NET users. Red Hat tells us that .NET has grown into an important developer platform for their ecosystem. That's great to hear.

The gold standard for Linux distributions is to build open source with compilers and toolchains that are part of the distribution archive. This works for the .NET runtime (written in C++), but not for any code written in C#. For C# code, we use a two-pass build mechanism to meet release requirements. It’s a bit complicated, but it’s important to understand the process.

Red Hat uses the Microsoft binary build of the .NET SDK (#1) to build the .NET SDK source code to produce a pure open source binary build of the SDK (#2). Afterwards, the same SDK source code is built again using this new version of the SDK (#2) to produce a provably open-source SDK (#3). The final binary release of the .NET SDK (#3) will then be available to RHEL users. Later, Red Hat can use the same SDK (#3) to build new .NET releases instead of using the Microsoft SDK to build monthly updates.

This process can seem surprising and confusing at first. Open source distributions need to be built with open source tools. This mode ensures that the Microsoft-built SDK is never required, whether intentionally or not. As a developer platform, the bar for inclusion in a distribution is higher than just using a compatible license. The source build project enables .NET to meet that bar.

The deliverable of source build is a source tarball. The source tarball contains all the source of the SDK (for a given release). From there, Red Hat (or another organization) can build their own version of the SDK. Red Hat policy requires using a built-from-source toolchain to produce their binary tarball, which is why they use the two-pass approach. The source build itself, however, does not require this two-pass approach.

In the Linux ecosystem, it is very common to have both source and binary packages or tarballs for a given component. We already have the binary tarballs available and now also the source tarballs. This makes .NET match the standard component pattern.

The big improvement in .NET 6 is that the source tarball is now a product of our build. It used to require significant manual effort to produce, which also resulted in long delays delivering source tarballs to Red Hat. Neither side was happy about that.

We have worked closely with Red Hat on this project for over five years. Its success is due in large part to the efforts of the brilliant Red Hat engineers we’ve had the privilege of working with. Other distributions and organizations have and will benefit from their efforts.

As a side note, building from source is a huge step towards reproducible builds, and we strongly believe in that. The .NET SDK and C# compiler have significant reproducible build capabilities.

Library APIs

In addition to the APIs already covered, the following APIs have been added.

WebSocket Compression

Compression is important for any data transmitted over a network. WebSockets now support compression. We implemented the permessage-deflate extension for WebSockets (RFC 7692), which makes it possible to compress WebSocket message payloads using the DEFLATE algorithm. This feature was one of the top user requests for Networking on GitHub.

Compression used together with encryption can lead to attacks such as CRIME and BREACH. This means that a secret cannot be sent together with user-generated data in a single compression context, or the secret could be extracted. To make users aware of these implications and help them weigh the risks, we named one of the key APIs DangerousDeflateOptions. We also added the ability to turn off compression for specific messages, so if a user wants to send a secret, they can do so safely without compression.

The memory footprint of WebSocket is reduced by about 27% when compression is disabled.

Enabling compression from the client is easy, as shown in the example below. However, keep in mind that servers can negotiate settings such as requesting smaller windows or refusing compression entirely.

var cws = new ClientWebSocket();
cws.Options.DangerousDeflateOptions = new WebSocketDeflateOptions()
{
    ClientMaxWindowBits = 10,
    ServerMaxWindowBits = 10
};

WebSocket compression support for ASP.NET Core has also been added.

Credit to Ivan Zratanov.

SOCKS proxy support

SOCKS is a proxy server implementation that can handle any TCP or UDP traffic, making it a very versatile system. This is a long-standing community request that was added in .NET 6.

This change adds support for Socks4, Socks4a, and Socks5. It enables, for example, testing external connections via SSH or connecting to the Tor network.

The WebProxy class now accepts the socks scheme, as shown in the following example.

var handler = new HttpClientHandler
{
    Proxy = new WebProxy("socks5://127.0.0.1", 9050)
};
var httpClient = new HttpClient(handler);

Credit to Huo yaoyuan.

Microsoft.Extensions.Hosting — configure hosting options API

We’ve added a new ConfigureHostOptions API on top of IHostBuilder to simplify application setup (e.g. configure shutdown timeout):

using IHost host = new HostBuilder()
    .ConfigureHostOptions(o =>
    {
        o.ShutdownTimeout = TimeSpan.FromMinutes(10);
    })
    .Build();

host.Run();

In .NET 5, configuring host options is a bit more complicated:

using IHost host = new HostBuilder()
    .ConfigureServices(services =>
    {
        services.Configure<HostOptions>(o =>
        {
            o.ShutdownTimeout = TimeSpan.FromMinutes(10);
        });
    })
    .Build();

host.Run();

Microsoft.Extensions.DependencyInjection — CreateAsyncScope API

The CreateAsyncScope API was created to handle the disposal of IAsyncDisposable services. Previously, you might have noticed that disposing a service provider scope containing an IAsyncDisposable service could throw an InvalidOperationException.

The following example demonstrates the new pattern, using CreateAsyncScope to enable safe use of the using statement.

await using (var scope = provider.CreateAsyncScope())
{
    var foo = scope.ServiceProvider.GetRequiredService<Foo>();
}

The following example demonstrates an existing problem case:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

await using var provider = new ServiceCollection()
        .AddScoped<Foo>()
        .BuildServiceProvider();

// This using can throw InvalidOperationException
using (var scope = provider.CreateScope())
{
    var foo = scope.ServiceProvider.GetRequiredService<Foo>();
}

class Foo : IAsyncDisposable
{
    public ValueTask DisposeAsync() => default;
}

The following pattern is a previously suggested workaround to avoid exceptions. It is no longer needed.

var scope = provider.CreateScope();
var foo = scope.ServiceProvider.GetRequiredService<Foo>();
await ((IAsyncDisposable)scope).DisposeAsync();

Thanks to Martin Björkström.

Microsoft.Extensions.Logging — compile-time source generator

.NET 6 introduces the LoggerMessageAttribute type. This attribute is part of the Microsoft.Extensions.Logging namespace, and when used, it source-generates performant logging APIs. The source-generated logging support is designed to deliver a highly usable and high-performance logging solution for modern .NET applications. The generated source code relies on the ILogger interface and the LoggerMessage.Define functionality.

The LoggerMessage source generator is triggered when LoggerMessageAttribute is applied to partial logging methods. When triggered, it either autogenerates the implementation of the partial method it decorates, or produces compile-time diagnostics with hints about correct usage. The compile-time logging solution is typically considerably faster at runtime than existing logging approaches, because it minimizes boxing, temporary allocations, and copies.

Compared with directly using the LoggerMessage.Define API manually, it has the following advantages:

  • Shorter and simpler syntax: declarative attribute usage instead of coding boilerplate.
  • Guided developer experience: Generators issue warnings to help developers do the right thing.
  • Any number of logging parameters are supported. LoggerMessage.Define supports up to six.
  • Supports dynamic log levels. This is not possible with LoggerMessage.Define alone.

To use LoggerMessageAttribute, the consumer class and method need to be partial. The code generator triggers at compile time and generates the implementation of the partial method.

public static partial class Log
{
    [LoggerMessage(EventId = 0, Level = LogLevel.Critical, Message = "Could not open socket to `{hostName}`")]
    public static partial void CouldNotOpenSocket(ILogger logger, string hostName);
}

In the preceding example, the logging method is static and the log level is specified in the property definition. When using properties in a static context, ILogger requires an instance as a parameter. You can also choose to use this property in a non-static context. For more examples and usage scenarios, visit the compile-time logging source generator documentation.
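A hypothetical call site for the generated method might look like the following (assuming a console logger is configured; LoggerFactory.Create and AddConsole come from the Microsoft.Extensions.Logging and Microsoft.Extensions.Logging.Console packages):

using Microsoft.Extensions.Logging;

using ILoggerFactory factory = LoggerFactory.Create(builder => builder.AddConsole());
ILogger logger = factory.CreateLogger("Sockets");

// Invokes the implementation emitted by the source generator for the partial method
Log.CouldNotOpenSocket(logger, "contoso.com");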

System.Linq — Enumerable supports Index and Range parameters

The Enumerable.ElementAt method now accepts an index from the end of the enumerable, as shown in the following example.

Enumerable.Range(1, 10).ElementAt(^2); // returns 9

Added an overload of Enumerable.Take that accepts a Range parameter. It simplifies slicing over enumerable sequences:

  • source.Take(..3) instead of source.Take(3)
  • source.Take(3..) instead of source.Skip(3)
  • source.Take(2..7) instead of source.Take(7).Skip(2)
  • source.Take(^3..) instead of source.TakeLast(3)
  • source.Take(..^3) instead of source.SkipLast(3)
  • source.Take(^7..^3) instead of source.TakeLast(7).SkipLast(3)

Thanks @dixin.

System.Linq —TryGetNonEnumeratedCount

The TryGetNonEnumeratedCount method attempts to obtain the count of the source enumerable without forcing enumeration. This is useful in scenarios where it helps to pre-allocate a buffer before enumerating, as shown in the following example.

List<T> buffer = source.TryGetNonEnumeratedCount(out int count)
    ? new List<T>(capacity: count)
    : new List<T>();

foreach (T item in source)
{
    buffer.Add(item);
}

TryGetNonEnumeratedCount checks for sources implementing ICollection/ICollection<T>, and otherwise takes advantage of some of the internal optimizations employed by LINQ.

System.Linq — DistinctBy/ UnionBy/ IntersectBy/ExceptBy

New variants have been added to set operations that allow specifying equality using key selector functions, as shown in the example below.

Enumerable.Range(1, 20).DistinctBy(x => x % 3); // {1, 2, 3}

var first = new (string Name, int Age)[] { ("Francis", 20), ("Lindsey", 30), ("Ashley", 40) };

var second = new (string Name, int Age)[] { ("Claire", 30), ("Pat", 30), ("Drew", 33) };

first.UnionBy(second, person => person.Age); // { ("Francis", 20), ("Lindsey", 30), ("Ashley", 40), ("Drew", 33) }

System.Linq — MaxBy/MinBy

The MaxBy and MinBy methods allow finding the largest or smallest element using a key selector, as shown in the following example.

var people = new (string Name, int Age)[] { ("Francis", 20), ("Lindsey", 30), ("Ashley", 40) };

people.MaxBy(person => person.Age); // ("Ashley", 40)

System.Linq — Chunk

Chunk can be used to split an enumerable source into fixed-size slices, as shown in the example below.

IEnumerable<int[]> chunks = Enumerable.Range(0, 10).Chunk(size: 3); // { {0,1,2}, {3,4,5}, {6,7,8}, {9} }

Credit goes to Robert Anderson.

System.Linq — FirstOrDefault/LastOrDefault/SingleOrDefault overloads that take default parameters

The existing FirstOrDefault/LastOrDefault/SingleOrDefault methods return default(T) when the source enumerable is empty. New overloads have been added that accept a default value to return in that case, as shown in the following example.

Enumerable.Empty<int>().SingleOrDefault(-1); // returns -1

Thanks @Foxtrek64.

System.Linq — Zip supports three enumerables

The Zip method now supports combining three enumerables, as shown in the following example.

var xs = Enumerable.Range(1, 10);
var ys = xs.Select(x => x.ToString());
var zs = xs.Select(x => x % 2 == 0);

foreach ((int x, string y, bool z) in Enumerable.Zip(xs, ys, zs))
{
}

Credit to Huo yaoyuan.

priority queue

PriorityQueue<TElement, TPriority> (System.Collections.Generic) is a new collection that lets you add items with a value and a priority. On dequeue, the PriorityQueue returns the element with the lowest priority value. You can think of this new collection as similar to Queue<T>, but where each enqueued element has a priority value that affects the dequeue behavior.

The following example demonstrates PriorityQueue<string, int>.

// creates a priority queue of strings with integer priorities
var pq = new PriorityQueue<string, int>();

// enqueue elements with associated priorities
pq.Enqueue("A", 3);
pq.Enqueue("B", 1);
pq.Enqueue("C", 2);
pq.Enqueue("D", 3);

pq.Dequeue(); // returns "B"
pq.Dequeue(); // returns "C"
pq.Dequeue(); // either "A" or "D", stability is not guaranteed.

Credit to Patryk Golebiowski.

Faster processing of structs as dictionary values

CollectionsMarshal.GetValueRefOrNullRef is a new unsafe API that enables faster updating of struct values in dictionaries. The new API is intended for high-performance scenarios, not for general use. It returns a ref to the struct value, which can then be updated in place using typical techniques.

The following example demonstrates how to use the new API:

ref MyStruct value = ref CollectionsMarshal.GetValueRefOrNullRef(dictionary, key);

// Returns Unsafe.NullRef<TValue>() if the key doesn't exist; check using Unsafe.IsNullRef(ref value)
if (!Unsafe.IsNullRef(ref value))
{
    // Mutate in-place
    value.MyInt++;
}

Before this change, updating a struct dictionary value could be expensive in high-performance scenarios, requiring a dictionary lookup and a copy of the struct onto the stack. After mutating the struct, it had to be assigned back to the dictionary key, causing another lookup and copy operation. This improvement reduces key hashing to one operation (from two) and removes all struct copy operations.

Credit to Ben Adams.

New DateOnly and TimeOnly structures

Date-only and time-only structs were added, with the following characteristics:

  • Each represents half of a DateTime: either only the date part, or only the time part.
  • DateOnly is perfect for birthdays, anniversaries, and weekdays. It is consistent with the SQL Server date type.
  • TimeOnly is great for recurring meetings, alarm clocks, and weekly work hours. It is consistent with the time type of SQL Server.
  • Complements existing date/time types (DateTime, DateTimeOffset, TimeSpan, TimeZoneInfo).
  • In the System namespace, provided in CoreLib, just like existing related types.
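A small sketch of the new types (assuming .NET 6):

using System;

var date = DateOnly.FromDateTime(DateTime.Now); // just the date part
var time = new TimeOnly(9, 30);                 // 9:30 AM, with no date attached

DateTime combined = date.ToDateTime(time);      // recombine when a full DateTime is needed
Console.WriteLine(combined);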

DateTime.UtcNow performance improvements

This improvement has the following benefits:

  • Fixed a 2.5x performance regression for getting the system time on Windows.
  • Utilizes a 5-minute sliding cache of Windows leap-second data instead of fetching it on every call.

Support for Windows and IANA time zones on all platforms

This improvement has the following benefits:

  • Implicit conversion when using TimeZoneInfo.FindSystemTimeZoneById (https://github.com/dotnet/run…)
  • Explicit conversion via new APIs on TimeZoneInfo: TryConvertIanaIdToWindowsId, TryConvertWindowsIdToIanaId, and HasIanaId (https://github.com/dotnet/run…)
  • Improved cross-platform support and interoperability between systems using different time zone types.
  • Removed need to use TimeZoneConverter OSS library. This functionality is now built-in.

Improved time zone display names

Time zone display names on Unix have been improved:

  • Disambiguated display names in the list returned by TimeZoneInfo.GetSystemTimeZones.
  • Leverages ICU/CLDR globalization data.
  • Applies to Unix only. Windows still uses registry data. This may change in the future.

The following additional improvements have also been made:
  • The display and standard names for the UTC time zone were hardcoded in English and now use the same language as the rest of the time zone data (CurrentUICulture on Unix, the OS default language on Windows).
  • Due to size constraints, time zone display names in Wasm were changed to use non-localized IANA IDs.
  • The TimeZoneInfo.AdjustmentRule nested class exposes its BaseUtcOffsetDelta internal property and gets a new constructor that takes baseUtcOffsetDelta as a parameter. (https://github.com/dotnet/run…)
  • TimeZoneInfo.AdjustmentRule also got various fixes for loading timezones on Unix (https://github.com/dotnet/run…), (https://github.com/dotnet/run…)

Improved support for Windows ACLs

System.Threading.AccessControl now includes improved support for interacting with Windows Access Control Lists (ACLs). New overloads were added to the OpenExisting and TryOpenExisting methods of Mutex, Semaphore, and EventWaitHandle. These overloads, which accept a security-rights instance, allow opening existing instances of threading synchronization objects that were created with special Windows security attributes.

This update matches the API available in the .NET Framework and has the same behavior.

The following examples demonstrate how to use these new APIs.

For Mutex:

var rights = MutexRights.FullControl;
string mutexName = "MyMutexName";

var security = new MutexSecurity();
SecurityIdentifier identity = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
MutexAccessRule accessRule = new MutexAccessRule(identity, rights, AccessControlType.Allow);
security.AddAccessRule(accessRule);

// createdMutex, openedMutex1 and openedMutex2 point to the same mutex
Mutex createdMutex = MutexAcl.Create(initiallyOwned: true, mutexName, out bool createdNew, security);
Mutex openedMutex1 = MutexAcl.OpenExisting(mutexName, rights);
MutexAcl.TryOpenExisting(mutexName, rights, out Mutex openedMutex2);

For Semaphore:

var rights = SemaphoreRights.FullControl;
string semaphoreName = "MySemaphoreName";

var security = new SemaphoreSecurity();
SecurityIdentifier identity = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
SemaphoreAccessRule accessRule = new SemaphoreAccessRule(identity, rights, AccessControlType.Allow);
security.AddAccessRule(accessRule);

// createdSemaphore, openedSemaphore1 and openedSemaphore2 point to the same semaphore
Semaphore createdSemaphore = SemaphoreAcl.Create(initialCount: 1, maximumCount: 3, semaphoreName, out bool createdNew, security);
Semaphore openedSemaphore1 = SemaphoreAcl.OpenExisting(semaphoreName, rights);
SemaphoreAcl.TryOpenExisting(semaphoreName, rights, out Semaphore openedSemaphore2);

For EventWaitHandle:

var rights = EventWaitHandleRights.FullControl;
string eventWaitHandleName = "MyEventWaitHandleName";

var security = new EventWaitHandleSecurity();
SecurityIdentifier identity = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
EventWaitHandleAccessRule accessRule = new EventWaitHandleAccessRule(identity, rights, AccessControlType.Allow);
security.AddAccessRule(accessRule);

// createdHandle, openedHandle1 and openedHandle2 point to the same event wait handle
EventWaitHandle createdHandle = EventWaitHandleAcl.Create(initialState: true, EventResetMode.AutoReset, eventWaitHandleName, out bool createdNew, security);
EventWaitHandle openedHandle1 = EventWaitHandleAcl.OpenExisting(eventWaitHandleName, rights);
EventWaitHandleAcl.TryOpenExisting(eventWaitHandleName, rights, out EventWaitHandle openedHandle2);

HMAC one-shot method

The System.Security.Cryptography HMAC classes now have static methods that allow one-shot computation of HMACs without allocation. These additions are similar to the one-shot methods for hash generation that were added in a previous release.
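A minimal sketch of the one-shot pattern (the key and data here are illustrative; HMACSHA256.HashData and RandomNumberGenerator.GetBytes are both .NET 6 additions):

using System;
using System.Security.Cryptography;
using System.Text;

byte[] key = RandomNumberGenerator.GetBytes(32);
byte[] data = Encoding.UTF8.GetBytes("hello");

byte[] mac = HMACSHA256.HashData(key, data); // no HMACSHA256 instance allocated
Console.WriteLine(Convert.ToHexString(mac));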

DependentHandle is now public

The DependentHandle type is now public, with the following API surface:

namespace System.Runtime
{
    public struct DependentHandle : IDisposable
    {
        public DependentHandle(object? target, object? dependent);
        public bool IsAllocated { get; }
        public object? Target { get; set; }
        public object? Dependent { get; set; }
        public (object? Target, object? Dependent) TargetAndDependent { get; }
        public void Dispose();
    }
}

It can be used to create advanced systems, such as sophisticated caches or a custom version of the ConditionalWeakTable<TKey, TValue> type. For example, the WeakReferenceMessenger type in the MVVM Toolkit will use it to avoid memory allocations when broadcasting messages.
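A small sketch of basic usage (the variable names are illustrative):

using System;
using System.Runtime;

object key = new();
object value = new();

// 'value' is kept alive only as long as 'key' is reachable, without
// extending the lifetime of 'key' itself.
var handle = new DependentHandle(key, value);

object? dependent = handle.Dependent; // retrievable while 'key' is alive
handle.Dispose();                     // always dispose to free the underlying GC handle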

Portable Thread Pool

The .NET thread pool has been reimplemented as a managed implementation and is now used as the default thread pool in .NET 6. We made this change so that all .NET applications can access the same thread pool, regardless of whether CoreCLR, Mono, or any other runtime is being used. We did not observe or anticipate any functional or performance impact as part of this change.

RyuJIT

The team made many improvements to the .NET JIT compiler in this release, documented in each preview post. Most of these changes improve performance. Here are some highlights of RyuJIT.

Dynamic PGO

In .NET 6, we enabled two forms of PGO (Profile Guided Optimization):

  • Dynamic PGO uses data collected from the current run to optimize the current run.
  • Static PGO relies on data collected from past runs to optimize future runs.

Dynamic PGO was covered in the performance section earlier in this post, but here is a recap.

Dynamic PGO enables the JIT to gather information at runtime about the code paths and types that are actually used for a particular run of the application. The JIT can then optimize the code for those code paths, sometimes improving performance dramatically. We have seen solid double-digit improvements in both testing and production. There is a classic set of compiler techniques that are not possible with either JIT or ahead-of-time compilation without PGO. We are now able to apply those techniques: hot/cold splitting is one, and devirtualization is another.

To enable dynamic PGO, set DOTNET_TieredPGO=1 in the environment where the application will run.

As mentioned in the performance section, Dynamic PGO increased the number of requests per second for the TechEmpower JSON “MVC” suite by 26% (510K -> 640K). This is an amazing improvement, no code changes required.

Our goal is to have dynamic PGO enabled by default in future .NET versions, hopefully in .NET 7. We strongly encourage you to try Dynamic PGO in the app and give us feedback.

Complete PGO

To take full advantage of dynamic PGO, you can set two additional environment variables: DOTNET_TC_QuickJitForLoops=1 and DOTNET_ReadyToRun=0. These ensure that as many methods as possible participate in tiered compilation. We refer to this variant as full PGO. Full PGO can provide a larger steady-state performance benefit than dynamic PGO, but at the expense of slower startup time (because more methods must first run at tier 0).

You don’t want to use this option for short-running serverless applications, but it might make sense for long-running applications.

In future releases, we plan to streamline and simplify these options so that you can more easily reap the benefits of full PGO and use for a wider range of applications.

Static PGO

We currently use static PGO to optimize the .NET library assemblies, such as System.Private.CoreLib, that ship in R2R (Ready To Run) format.

The benefit of static PGO is that the optimization happens when crossgen compiles assemblies into the R2R format. That means there is a runtime benefit with no runtime cost. This is why PGO has long been important for C++, for example.

loop alignment

Memory alignment is a common requirement for various operations in modern computing. In .NET 5, we started aligning methods on 32-byte boundaries. In .NET 6, we added a feature that performs adaptive loop alignment, which adds NOP padding instructions in methods with loops so that the loop code starts at a mod(16) or mod(32) memory address. These changes improve and stabilize the performance of .NET code.

In the bubble sort diagram below, data point 1 represents the point where we start aligning the method on a 32-byte boundary. Data point 2 represents the point where we also start aligning the inner loop. As you can see, the performance and stability of the benchmarks have improved a lot.

[Chart: bubble sort benchmark timings, showing improved and stabilized performance after the alignment changes]

Hardware Acceleration Architecture

Structs are an important part of the CLR type system. In recent years, they have been used frequently as performance primitives throughout the .NET libraries. The most obvious examples are ValueTask, ValueTuple, and Span<T>; record structs are a newer example. In .NET 5 and .NET 6, we have been improving the performance of structs, in part by ensuring that structs can be held in ultra-fast CPU registers when they are local variables, parameters, or return values of methods. This is especially useful for APIs that perform vector computation.

Stable performance measurement

There's a ton of engineering-systems work on the team that never makes it onto the blog. That is true for any hardware or software product you use. The JIT team embarked on a project to stabilize performance measurements, with the goal of making the regressions automatically reported by our internal performance lab more trustworthy. This project is interesting because of the deep investigation and product changes that were required to achieve stability. It also demonstrates the scale at which we measure in order to maintain and improve performance.

[Chart: a single benchmark's timings over successive runs, before and after the stabilization changes]

This chart demonstrates erratic performance measurements, where performance fluctuates between slow and fast across successive runs. The x-axis is the test date and the y-axis is the test time in nanoseconds. By the end of the chart (after these changes were committed), you can see that the measurements stabilized, at the best result. The chart shows a single test; more tests demonstrating similar behavior are shown in dotnet/runtime #43227.

Ready-to-run code / Crossgen2

Crossgen2 is a replacement for the crossgen tool. It aims to deliver two outcomes:

  • Make crossgen development more efficient.
  • Enable a set of features that are not currently possible with crossgen.

This transition is somewhat similar to the move from the native-code C# compiler (csc.exe) to the managed-code Roslyn compiler. Crossgen2 is written in C#, but it does not expose a rich API the way Roslyn does.

We have perhaps half a dozen projects planned across .NET 6 and 7 that depend on crossgen2. The vector instruction default proposal is a good example of a crossgen2 capability and product change that we hoped to make for .NET 6, but which is more likely to land in .NET 7. Version bubbles are another good example.

Crossgen2 supports cross-compilation across operating system and architecture dimensions (hence the name “crossgen”). This means that you will be able to use a single build machine to generate native code for all targets, at least as far as ready-to-run code is concerned. However, running and testing that code is another matter, and for that you need the right hardware and operating system.

The first step is to compile the platform itself with crossgen2. We did that for all architectures in .NET 6, and as a result we were able to retire the old crossgen in this release. Note that crossgen2 applies only to CoreCLR, not to Mono-based applications (which have a separate set of code-generation tools).

The project was not, at least at first, oriented toward performance. The goal is a better architecture for hosting the RyuJIT compiler (or any other) to generate code in an offline manner (without requiring or starting the runtime).

You might say "hey... if it's written in C#, don't you need to start the runtime to run crossgen2?" Yes, but that's not what "offline" means here. When crossgen2 runs, we don't use the JIT that comes with the runtime running crossgen2 to generate ready-to-run (R2R) code. That wouldn't work, at least not for our purposes. Imagine crossgen2 running on an x64 machine while we need to generate code for Arm64. Crossgen2 loads the Arm64 RyuJIT (compiled for x64) as a native plugin and uses it to generate Arm64 R2R code. The machine instructions are just a stream of bytes saved to a file. It also works in the opposite direction: on Arm64, crossgen2 can generate x64 code using the x64 RyuJIT compiled for Arm64. We use the same approach to target x64 code on x64 machines; crossgen2 loads whichever RyuJIT build is needed for a given configuration. This may seem complicated, but it's the kind of system you need if you want a seamless cross-targeting model, and that's exactly what we wanted.

We expect to use the term "crossgen2" for only one release, after which it will replace the existing crossgen, and we will return to using the term "crossgen" to mean "crossgen2".

.NET Diagnostics: EventPipe

EventPipe is our cross-platform mechanism for outputting events, performance data, and counters in-process or out-of-process. Starting with .NET 6, we’ve moved the implementation from C++ to C. With this change, Mono also uses EventPipe. This means that both CoreCLR and Mono use the same event infrastructure, including the .NET diagnostic CLI tools.

This change is accompanied by a small reduction in CoreCLR:
[Chart: reduction in CoreCLR binary size after moving EventPipe from C++ to C]

We've also made changes to improve EventPipe throughput under load. Over the first few preview releases, we made a series of changes that resulted in throughput up to 2.06x higher than .NET 5:
[Chart: EventPipe event throughput, .NET 5 vs. .NET 6 (higher is better)]

For this benchmark, higher is better. .NET 6 is the orange line, .NET 5 is the blue line.

SDK

The following improvements have been made to the .NET SDK.

CLI installation of .NET 6 SDK optional workloads

.NET 6 introduced the concept of SDK workloads. Workloads are optional components that can be installed on top of the .NET SDK to enable various scenarios. The new workloads in .NET 6 are: .NET MAUI and Blazor WebAssembly AOT workloads. We may create new workloads (possibly from existing SDKs) in .NET 7. The biggest benefit of workloads is size reduction and optionality. We hope to keep the SDK smaller over time and only install the components you need. This model is good for developer machines, and even better for CI.

Visual Studio users don’t really need to worry about workloads. The workload feature is designed so that an installation coordinator like Visual Studio can install the workload for you. Workloads can be managed directly through the CLI.

The workload functionality exposes several verbs for managing workloads, including the following:

  • dotnet workload restore—Installs the required workload for a given project.
  • dotnet workload install—installs the named workload.
  • dotnet workload list—Lists your installed workloads.
  • dotnet workload update—updates all installed workloads to the latest available version.

The update verb queries nuget.org for updated workload manifests, updates the local manifests, downloads new versions of the installed workloads, and then removes all old versions of each workload. This is analogous to apt update && apt upgrade -y (used on Debian-based Linux distributions). It is reasonable to think of workloads as a private package manager for the SDK. It is private in the sense that it applies only to SDK components; we may reconsider that in the future. The dotnet workload commands run in the context of the given SDK. Suppose you have both .NET 6 and .NET 7 installed: the workload commands will give different results for each SDK, because the workloads will differ (at least by version).

Note that workloads from NuGet.org are copied into your SDK installation, so dotnet workload install needs to be run elevated or with sudo if the SDK install location is protected (that is, it was installed by admin/root).

Built-in SDK version check

To make it easier to keep track of when new versions of the SDK and runtime are available, we’ve added a new command to the .NET 6 SDK.

dotnet sdk check

It will tell you if newer versions are available for any .NET SDKs, runtimes, or workloads you have installed. You can see the new experience in the image below.

[Screenshot: dotnet sdk check output listing installed SDKs, runtimes, and workloads with their update status]

dotnet new

You can now search for templates on NuGet.org with dotnet new --search.

Other improvements to template installation include support for the --interactive switch, which enables authorization credentials for private NuGet feeds.

Once CLI templates are installed, you can check whether updates are available and apply them with --update-check and --update-apply.

NuGet Package Validation

The package validation tool enables NuGet library developers to verify that their packages are consistent and well-formed.

This includes:

  • Verify that there are no breaking changes between releases.
  • Verify that the package has the same set of public APIs for all runtime-specific implementations.
  • Identify any target framework or runtime applicability gaps.

This tool is part of the SDK. The easiest way to use it is to set a new property in the project file.

<EnablePackageValidation>true</EnablePackageValidation>
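You can optionally also validate against a previously shipped version of your package; the baseline version below is hypothetical:

<PropertyGroup>
  <EnablePackageValidation>true</EnablePackageValidation>
  <PackageValidationBaselineVersion>1.0.0</PackageValidationBaselineVersion>
</PropertyGroup>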

More Roslyn Analyzers

In .NET 5, we shipped roughly 250 analyzers with the .NET SDK. Many of them already existed but had shipped out-of-band as NuGet packages. We added more analyzers for .NET 6.

Most of the new analyzers are enabled at the Info level by default. You can enable these analyzers at the Warning level by configuring the analysis mode like this: <AnalysisMode>All</AnalysisMode>

We shipped the set of analyzers we wanted for .NET 6 (plus a few extras), and then marked most of them as up-for-grabs. The community added several implementations, including these.

[Table: community-contributed analyzer implementations]

Thanks to Meik Tranel and Newell Clark.

Enable custom guards for the Platform Compatibility Analyzer

The CA1416 platform compatibility analyzer already recognizes platform guards that use the methods in OperatingSystem and RuntimeInformation, such as OperatingSystem.IsWindows and OperatingSystem.IsWindowsVersionAtLeast. However, the analyzer cannot recognize any other guard possibilities, such as a platform-check result cached in a field or property, or complex platform-check logic defined in a helper method.

To make custom guards possible, we added the SupportedOSPlatformGuard and UnsupportedOSPlatformGuard attributes, which annotate guard members with the corresponding platform name and/or version. These annotations are recognized and respected by the platform compatibility analyzer's flow analysis.

usage

    [UnsupportedOSPlatformGuard("browser")] // The platform guard attribute
#if TARGET_BROWSER
    internal bool IsSupported => false;
#else
    internal bool IsSupported => true;
#endif

    [UnsupportedOSPlatform("browser")]
    void ApiNotSupportedOnBrowser() { }

    void M1()
    {
        ApiNotSupportedOnBrowser();  // Warns: This call site is reachable on all platforms. 'ApiNotSupportedOnBrowser()' is unsupported on: 'browser'

        if (IsSupported)
        {
            ApiNotSupportedOnBrowser();  // Not warn
        }
    }

    [SupportedOSPlatform("Windows")]
    [SupportedOSPlatform("Linux")]
    void ApiOnlyWorkOnWindowsLinux() { }

    [SupportedOSPlatformGuard("Linux")]
    [SupportedOSPlatformGuard("Windows")]
    private readonly bool _isWindowOrLinux = OperatingSystem.IsLinux() || OperatingSystem.IsWindows();

    void M2()
    {
        ApiOnlyWorkOnWindowsLinux();  // Warns: This call site is reachable on all platforms. 'ApiOnlyWorkOnWindowsLinux()' is only supported on: 'Linux', 'Windows'.

        if (_isWindowOrLinux)
        {
            ApiOnlyWorkOnWindowsLinux();  // Not warn
        }
    }

Closing

Welcome to .NET 6. It’s another huge .NET release with lots of improvements in performance, functionality, usability, and security. We hope you find many improvements that ultimately make you more efficient and capable in your day-to-day development, and improve performance or reduce costs for applications in production. We’re already starting to hear good news from those who have started using .NET 6.

At Microsoft, we’re still in the early stages of .NET 6 deployment, with some key applications already in production, and many more to come in the weeks and months to come.

.NET 6 is our latest LTS release. We encourage everyone to move to it, especially if you’re using .NET 5. We expect it to be the fastest adopted version of .NET ever.

Thank you for being a .NET developer.
