Martin Costello's Blog
The blog of a software developer and tester.
Since 2017 I've been maintaining an Alexa skill, London Travel, that provides real-time information about the status of London Underground, London Overground, and the DLR (and the Elizabeth Line). The skill is an AWS Lambda function, originally implemented in Node.js, but since 2019 it has been implemented in .NET.
I've been using a custom runtime for the Lambda function instead of the .NET managed runtime. The main reason for this is that it lets me use any version of .NET, not just the ones that AWS supports. This has allowed me not only to use pre-release versions of .NET for testing, but also to adopt the latest versions of .NET as soon as they are released. The only disadvantage of this approach is that I have to patch the version of .NET being used once a month for Patch Tuesday, but I have automation set up to do that for me, so the overhead is actually minimal 😎.
As part of the .NET 8 release, the .NET team has put a lot of effort into broadening the capabilities of native AoT support. With .NET 8, many more use cases are supported for AoT, making the performance and size benefits of AoT available to more applications than before. The .NET team at AWS has also been working hard to ensure that the various AWS SDK libraries are compatible with AoT, with the various NuGet packages now annotated (and tested) as being AoT compatible.
With all these changes, I was curious to see how much of a difference AoT would make to the performance of my Lambda function, so I decided to try it out. In this post I'll go through what I needed to change to allow publishing my Alexa skill as a native application, what I learned along the way, and the results of the changes to the function's runtime performance.
TL;DR: It's faster, smaller, and cheaper to run. 🚀🔥
Let's dive in!
With the release of .NET 8.0.0 and the end of the preview releases, my past week can be summed up by the following image:
This post is a bumper edition, covering three different releases:
I had originally intended to continue the post-per-preview series, but time got away from me with preview 7, plus there wasn't much to say about it, and then I went on holiday for two weeks just as release candidate 1 landed. Given that release candidate 2 was released just a few days ago, I figured I'd catch up with myself and summarise everything in this one blog post instead!
Release Candidate 2 is also the last planned release before the final release of .NET 8 in November to coincide with .NET Conf 2023, so this is going to be the penultimate post in this series.
Following on from part 3 of this series, I've been continuing to upgrade my projects to .NET 8 - this time to preview 6. In this post I'll cover more experiences with the new source generators with this preview as well as a new feature of C# 12: primary constructors.
In the previous post of this series I described how, with some GitHub Actions workflows, we can reduce the amount of manual work required to test each preview of .NET 8 in our projects. With that infrastructure set up, we can now dig into some highlights of what we found while testing the .NET 8 preview releases available so far this year!
In part 1 of this series I recommended that you prepare to upgrade to .NET 8 and suggested that you start off by testing the preview releases. Testing the preview releases is a great way to get a head start on the upgrade process and to identify any issues sooner rather than later, but it does require an investment of your time from preview to preview each month.
Even if you don't want to test new functionality, you still need to download the new .NET SDK, update all the .NET SDK and NuGet package versions in your projects, and then test that everything still works (that's already automated at least, right?). This can be a time-consuming process over the course of a new .NET release, and it starts to become harder to scale if you want to test lots of different codebases with the latest preview of the next .NET release.
What if we could automate some of this process so that we only need to focus on the parts where we as humans really add value compared to the mechanical parts of an upgrade?
In part 2 of this series I'm going to explain how I've gone about automating the boring parts of the process of testing the latest .NET preview releases using GitHub Actions.
Another year, another new major version of .NET is coming - .NET 8, to be specific.
I write that like it's brand new information - it's been coming for a while, what with .NET 8 Preview 1 having been released in February - but it's only recently occurred to me to write this blog post series (yes, a series, more on that later).
As announced a few releases ago, a new major version of .NET is released every November. These alternate between an odd-numbered Short Term Support (STS) release and an even-numbered Long Term Support (LTS) release (see here).
There's a nice graphic here from the .NET website that illustrates how things look today:
That means .NET 8 will be the next LTS release, superseding both .NET 6 and .NET 7 by the end of 2024.
But why should you upgrade to .NET 8? Staying supported and patched is the primary reason, but there's another reason that sounds much more compelling:
"The first thing that you can do to get free performance in your ASP.NET or .NET applications is to upgrade your .NET version."
I've recently completed upgrading a bunch of personal and work applications to ASP.NET Core 6, and now that the dust has finally settled on those efforts, I thought I'd look into a new feature of .NET 6 that I hadn't tried out yet - JSON source generators.
One of the benefits of the new JSON source generator for the System.Text.Json serializer is that it is more performant than the APIs introduced as part of .NET 5. This is because the serializer can leverage code that is compiled ahead of time (the source generator part) to serialize and deserialize objects to and from JSON without using reflection (which is relatively slow).
It sounds like that could give applications a performance boost at runtime, but how can we use the new JSON source generator with ASP.NET Core Minimal APIs?
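As a sketch of what using the source generator can look like (the Greeting and AppJsonContext names here are illustrative, not from any of my real projects), you declare a partial JsonSerializerContext listing the types to generate serialization code for, and then pass its generated metadata to the serializer:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Illustrative model - not from the real application.
public record Greeting(string Message);

// The source generator emits the (de)serialization code for the types
// listed here at compile time, so no reflection is needed at runtime.
[JsonSerializable(typeof(Greeting))]
public partial class AppJsonContext : JsonSerializerContext
{
}

public static class Example
{
    public static string SerializeGreeting()
    {
        // Pass the generated type metadata instead of relying on reflection.
        return JsonSerializer.Serialize(new Greeting("Hello"), AppJsonContext.Default.Greeting);
    }
}
```

In a Minimal APIs application the context can then be registered through the framework's JSON options (Microsoft.AspNetCore.Http.Json.JsonOptions) so that request and response bodies are handled by the generated code too.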
This week GitHub Codespaces was made generally available for Teams and Enterprise, and coupled with the new ability to open any repository in Visual Studio Code in a web browser just by pressing the . key, I thought I'd give it a try with some existing projects. In the process I hit a few gotchas that took me several hours to get to the bottom of. This post goes through some of those and how to resolve them.
To protect the POST resources in an ASP.NET Core application from Cross-Site Request Forgery (CSRF), an application developer would typically use the antiforgery features to require that an antiforgery token and cookie be included in HTTP POST form requests.
A necessary downside of these protections is that they make it harder to integration test such resources, particularly in a headless manner. This is because the tests need to acquire the antiforgery token and cookie in order to pass the antiforgery checks on the resource being tested.
A typical approach for this is to scrape the HTML response from the application for the hidden form field token (often named __RequestVerificationToken) using regular expressions, and then to use that token, along with the cookie, in the request(s) the test(s) make. This can however make tests brittle to change, particularly if the UI is refactored.
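That token scraping often boils down to something like the following sketch (the helper name and HTML snippet are illustrative; it assumes the default ASP.NET Core hidden field name):

```csharp
using System;
using System.Text.RegularExpressions;

public static class AntiforgeryTokenScraper
{
    // Pulls the hidden antiforgery token's value out of an HTML response.
    // Assumes the default field name "__RequestVerificationToken" and that
    // the value attribute appears after the name attribute.
    public static string ExtractToken(string html)
    {
        var match = Regex.Match(
            html,
            "name=\"__RequestVerificationToken\"[^>]*value=\"([^\"]+)\"",
            RegexOptions.IgnoreCase);

        return match.Success ? match.Groups[1].Value : null;
    }
}
```

Note how fragile this is: if the markup changes attribute order, quoting style, or the field name, the regular expression silently stops matching, which is exactly the brittleness described above.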
In this blog post I'll discuss an alternate approach using ASP.NET Core Application Parts that can make such tests easier to author and maintain, allowing you to concentrate on the core logic of your tests, rather than boilerplate setup.