Designing Network Security To Combat Modern Threats

In this post, I want to discuss how one should design network security in Microsoft Azure, dispensing with past patterns and combatting threats that are crippling businesses today.

The Past

Network security did not change much for a very long time. The classic network design is focused on an edge firewall. “All the bad guys are trying to penetrate our network from the Internet,” so we’ll put up a very strong wall at the edge. With that approach, you’ll commonly find the “DMZ” network: a place where things like web proxies and DNS proxies isolate interior users and services from the Internet.

The internal network might be made up of two or more VLANs, for example, one or more client device VLANs and a server VLAN. While the route between those VLANs might pass through the firewall, it probably didn’t; traffic was really routed through a smart core switch stack, and there was little to no firewall isolation between those VLANs.

This network design is fertile soil for malware. Ports are not usually left open to attack on the edge firewall, and hackers aren’t normally going to brute force their way through it. There are easier ways in, such as:

  • Send an “invoice” PDF to the accounting department that delivers a trojan horse.
  • Impersonate someone, ideally someone that travels and shouts a lot, to convince a helpful IT person to reset a password.
  • Target users via phishing or spear phishing.
  • Compromise some upstream dependency that developers use and attack from the servers.
  • Use a SQL injection attack to open a command prompt on an internal server.
  • And on and on and …

In each of those cases, the attack comes from within, and its spread is unfettered. The blast area (a term used to describe the spread of an attack) is the entire network.

Secure Zones To The Rescue!

Government agencies love a nice secure zone architecture. This is a design where sensitive systems, such as GDPR data or secrets, are stored on an isolated network.

Some agencies will even create a whole duplicate network that is isolated, forcing users to have two PCs – one “regular” PC on the Internet-connected network and a “secure” PC that is wired onto an isolated network with limited secret services.

Realistically, that isolated network is of little value to most, but if you have that extreme a need – then good luck. By the way, that won’t work in The Cloud 🙂 Back to the more regular secure zone …

A special VLAN will be deployed and firewall rules will block all traffic into and out of that secure zone. The user experience might be to use Citrix desktops, hosted in the secure zone, to access services and data in that secure zone. But then reality starts cracking holes in the firewall’s deny all rules. No line of business app lives alone. They all require data from somewhere. Or there are integrations. Printers must be used. Scanners need to scan and share data. And legacy apps often use:

  • Domain (ADDS) credentials (how many ports do you need for that!!!)
  • SMB (TCP 445) for data transfer and integration

Over time, “deny all” becomes a long list of allow * from X to *, and so on, with absolutely no help from the app vendors.

The theory is that if an attack commences, then the blast area will be limited to the client network and, if it reaches the servers, it will be limited to the Internal network. But this design fails to understand that:

  • An attack can come from within. Consider the scenario where compromised runtimes are used or a SQL injection attack breaks out from a database server.
  • All the required integrations open up holes between the secure zone and the other networks, including those legacy protocols that things like ransomware live on.
  • If one workload in the secure zone is compromised, they all are because there is no network segmentation inside of the VLAN.

And eventually, the “secure zone” is no more secure than the Internal network.

Don’t Block The Internet!!!

I’m amazed how many organisations do not block outbound access to the Internet. Apparently it’s just such hard work to open up firewall rules for all those applications that have Internet dependencies. I can understand that for a client VLAN, but the server VLAN should be a controlled space – if a flow is not known & controlled (i.e. governed), then it should not be permitted.

A modern attack, an advanced persistent threat (APT), isn’t just some dumb blast, grab, and run. It is a sneaky process of:

  1. Penetration
  2. Discovery, often manually controlled
  3. Spread, often manually controlled
  4. Steal
  5. Destroy/encrypt/etc

Once an APT gets in, it usually wants to call home to pull instructions down from a rogue IP address or compromised bot. When the APT wants to steal data, to be used as blackmail and/or to be sold on the Darknet, the malware will seek to upload data to the Internet. Both of these actions are taking advantage of the all-too-common open access to the Internet.

Azure is Different

Years of working with clients has taught me that there are three kinds of people when it comes to Azure networking:

  1. Those who managed on-premises networks: These folks struggle with Azure networking.
  2. Those who didn’t do on-premises networking, but knew what to ask for: These folks take to Azure networking quite quickly.
  3. Everyone else: Irrelevant to this topic

What makes Azure networking so difficult for the network admins? There is no cabling in the fabric – obviously, there is cabling in the data centres, but it’s all abstracted by the VXLAN software-defined networks. Packets are encapsulated on the source virtual machine’s host, transmitted over the physical network, decapsulated on the destination virtual machine’s host, and presented to the destination virtual machine’s NIC. In short, packets leave the source NIC and magically arrive on the destination NIC with no hops in between – this is why traceroute is pointless in Azure and why the default gateway doesn’t really exist.

“I’m not going to use virtual machines, Aidan. I’m doing PaaS and serverless computing.” In Azure, everything is based on virtual machines, unless it is explicitly hosted on physical hosts (Azure VMware Solution and some SAP offerings, for example). Even Functions run on a VM somewhere hidden in the platform. Serverless means that you don’t need to manage it.

The software-defined thing is why:

  • Partitioned subnets for a firewall appliance (front, back, VPN, and management) offer nothing from a security perspective in Azure.
  • ICMP isn’t as useful as you’d imagine in Azure.
  • The concept of partitioning workloads for security using subnets is not as useful as you might think – it’s actually counter-productive over time.

Transformation

I like to remind people during a presentation or a project kickoff that going on a cloud journey is supposed to result in transformation. You now re-evaluate everything and find better ways to do old things using cloud-native concepts. And that applies to network security designs too.

Micro-Segmentation Is The Word

Forget “Grease”, get on board with what you need to counter today’s threats: micro-segmentation. This is a concept where:

  • We protect the edge, inbound and outbound, permitting only required traffic.
  • We apply network isolation within the workload, permitting only required traffic.
  • We route traffic between workloads through the edge firewall, permitting only required traffic.

Yes, more work will be required when you migrate existing workloads to Azure. I’d suggest using Azure Migrate to map network flows. I never get to do that – I always get the “messy migration projects” – so testing, assessing, and understanding NSG Traffic Analytics and the Azure Firewall logs via KQL is a necessary skill.
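For example, a first pass over Azure Firewall network rule logs to discover which flows are actually in use might look something like this. This is a sketch, assuming the legacy AzureDiagnostics schema and a diagnostic setting that sends firewall logs to a Log Analytics workspace (the resource-specific tables use different column names):

```kusto
// Summarise the most common flows seen by Azure Firewall network rules
// over the last day (legacy AzureDiagnostics schema assumed).
AzureDiagnostics
| where Category == "AzureFirewallNetworkRule"
| where TimeGenerated > ago(1d)
| parse msg_s with Protocol " request from " SourceIp ":" SourcePort:int " to " TargetIp ":" TargetPort:int *
| summarize Hits = count() by SourceIp, TargetIp, TargetPort
| order by Hits desc
```

A query like this, plus NSG Traffic Analytics, gives you the raw material to write accurate allow rules before tightening things down.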

Security Classification

Every workload should go through a security classification process. You need to weigh risk versus complexity. If you max out the security, you will increase costs and difficulty for otherwise simple operations. For example, a dev won’t be able to connect Visual Studio straight to an App Service if you deploy that App Service on a private or isolated App Service Plan. You will also have to host your own DevOps agents/GitHub runners because the Microsoft-hosted containers won’t be able to reach your SCM endpoints.

Every piece of compute is a potential attack vector: a VM, an App Service, a Function, a Container, a Logic App. The question is, if it is compromised, will the attacker be able to jump to something else? Will the accessible data be secret, subject to regulation, or capable of causing reputational damage?

This measurement process will determine if a workload should use resources that:

  • Have public endpoints (cheapest and easiest).
  • Use private endpoints (medium levels of cost, complexity, and security).
  • Use full VNet integration, such as an App Service Environment or a virtual machine (highest cost/complexity but most secure).

The Virtual Network & Subnet

Imagine you are building a 3-tier workload that will be isolated from the Internet using Azure virtual networking:

  • Web servers (Internet-facing)
  • Middle tier
  • Databases

Not that long ago, we would have deployed that workload on 3 subnets, one for each tier. Then we would have built isolation using Network Security Groups (NSGs), one for each subnet. But you just learned that a software-defined network routes packets directly from NIC to NIC. An NSG is a Hyper-V Port ACL that is implemented at the NIC, even if applied at the subnet level. We can create all the isolation we want using an NSG within the subnet. That means we can flatten the network design for the workload to one subnet. A subnet-associated NSG will restrict communications between the tiers – and ideally between nodes within the same tier. That level of isolation should block everything … should 🙂

Tips for virtual networks and subnets:

  • Deploy 1 virtual network per workload: Not only will this follow Azure Cloud Adoption Framework concepts, but it will help your overall security and governance design. Each workload is placed into a spoke virtual network and peered with a hub. The hub is used only for external connectivity, the firewall, and Azure Bastion (assuming this is not a vWAN hub).
  • Assign a single prefix to your hub & spoke: Firewall and NSG rules will be easier.
  • Keep the virtual networks small: Don’t waste your address space.
  • Flatten your subnets: Only deploy subnets when there is a technical need, for example, VMs and private endpoints are in one subnet, VNet integration for an App Service plan is in another, and a SQL managed instance is in a third.
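Put together, a small spoke for one workload might be sketched in Bicep like this. The names and address prefixes are illustrative assumptions, not prescriptions:

```bicep
// One small spoke virtual network per workload, with flat subnets that
// exist only because the platform requires them (illustrative values).
param location string = resourceGroup().location

resource spokeVnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: 'vnet-workload1'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.1.0.0/24' // keep it small – don't waste address space
      ]
    }
    subnets: [
      {
        name: 'snet-compute' // VMs and private endpoints together
        properties: { addressPrefix: '10.1.0.0/26' }
      }
      {
        name: 'snet-appsvc' // App Service VNet integration needs its own subnet
        properties: { addressPrefix: '10.1.0.64/26' }
      }
    ]
  }
}
```

The spoke would then be peered with the hub, which carries the firewall and external connectivity.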

Resource Firewalls

It’s sad to see how many people disable operating system firewalls. For example, Group Policy is used to disable Windows Firewall. Don’t you know that those firewalls were added to Windows and Linux to protect machines from internal attacks? Those firewalls should remain operational and only permit required traffic.

Many Azure resources also offer firewalls. App Services have firewalls. Azure SQL has a firewall. Use them! The one messy resource is the storage account. The endpoints for storage clusters sit in a weird place in the fabric – and this causes interesting situations. For example, a configured firewall on a Logic App’s storage account will prevent workflows from being created or from working correctly.

Network Security Groups

Take a look at the default inbound rules in an NSG. You’ll find there is a Deny All rule with the lowest possible priority. Just up from that rule is a built-in rule to allow traffic from VirtualNetwork. VirtualNetwork includes the subnet, the virtual network, and all routed networks, including peers and site-to-site connections. So all traffic from internal networks is … permitted! This is why every NSG that I create has a custom DenyAll rule with a priority of 4000. Higher-priority rules are created to permit required traffic, and only that required traffic.

Tips with your NSGs:

  • Use 1 NSG per subnet: Where the subnet resources will support an NSG. You will reduce your overall complexity and make troubleshooting easier. Remember, all NSG rules are actually applied at the source (outbound rules) or target (inbound rules) NIC.
  • Limit the use of “any”: Rules should be as accurate as possible. For example: Allow TCP 445 from source A to destination B.
  • Consider the use of Application Security Groups: You can abstract IP addresses with an Application Security Group (ASG) in an NSG rule. ASGs can be used with NICs – virtual machines and private endpoints.
  • Enable NSG Flow Logs & Traffic Analytics: Great for troubleshooting networking (not just firewall stuff) and for feeding data to a SIEM. VNet Flow Logs will be a superior replacement when they are ready for GA.
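As a sketch, a custom DenyAll rule at priority 4000 plus one accurate allow rule using Application Security Groups could look like this in Bicep. The names and the ASG resources are assumptions for illustration:

```bicep
// NSG with a custom DenyAll at priority 4000 and a single accurate
// allow rule between tiers, abstracted with Application Security Groups.
param location string = resourceGroup().location

resource asgWeb 'Microsoft.Network/applicationSecurityGroups@2023-04-01' = {
  name: 'asg-web'
  location: location
}

resource asgSql 'Microsoft.Network/applicationSecurityGroups@2023-04-01' = {
  name: 'asg-sql'
  location: location
}

resource nsg 'Microsoft.Network/networkSecurityGroups@2023-04-01' = {
  name: 'nsg-workload1'
  location: location
  properties: {
    securityRules: [
      {
        name: 'AllowWebToSql' // as accurate as possible – no "any"
        properties: {
          priority: 300
          direction: 'Inbound'
          access: 'Allow'
          protocol: 'Tcp'
          sourcePortRange: '*'
          destinationPortRange: '1433'
          sourceApplicationSecurityGroups: [ { id: asgWeb.id } ]
          destinationApplicationSecurityGroups: [ { id: asgSql.id } ]
        }
      }
      {
        name: 'DenyAll' // overrides the built-in VirtualNetwork allow rule
        properties: {
          priority: 4000
          direction: 'Inbound'
          access: 'Deny'
          protocol: '*'
          sourcePortRange: '*'
          destinationPortRange: '*'
          sourceAddressPrefix: '*'
          destinationAddressPrefix: '*'
        }
      }
    ]
  }
}
```

Associate the NSG with the workload subnet; remember the rules are still enforced at each NIC.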

The Hub

As I’ve implied already, you should employ a hub & spoke design. The hub should be simple, small, and free of compute. The hub:

  • Makes connections using site-to-site networking using SD-WAN, VPN, and/or ExpressRoute.
  • Hosts the firewall. The firewall blocks everything in every direction by default.
  • Hosts Azure Bastion, unless you are running Azure Virtual WAN – then deploy it to a spoke.
  • Is the “Public IP” for egress traffic for workloads trying to reach the Internet. All egress traffic is via the firewall. Azure Policy should be used to restrict Public IP Addresses to just those resources that require them – things like Azure Bastion require a public IP, and you should create a policy override for each required resource ID.
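That guardrail could be sketched as a custom Azure Policy definition that denies public IP address resources, deployed at subscription scope; exemptions or overrides would then be created for approved resources such as Azure Bastion. The name and display name here are assumptions:

```bicep
// Subscription-scope policy definition that denies the creation of
// public IP addresses; exempt approved resources (e.g. Azure Bastion).
targetScope = 'subscription'

resource denyPublicIp 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
  name: 'deny-public-ip-addresses'
  properties: {
    displayName: 'Deny public IP addresses'
    policyType: 'Custom'
    mode: 'All'
    policyRule: {
      if: {
        field: 'type'
        equals: 'Microsoft.Network/publicIPAddresses'
      }
      then: {
        effect: 'deny'
      }
    }
  }
}
```

Assign the definition to the subscription or management group, then grant exemptions per approved resource.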

My preference is to use Azure Firewall. That’s a long conversation so let’s move on to another topic; Azure Bastion.

Most folks will go into Azure thinking that they will RDP/SSH straight to their VMs. RDP and SSH are not perfect. This is something that the secure zone concept recognised. It was not unusual for admins/operators to use a bastion host to hop via RDP or SSH from their PC to the required server via another server. RDP/SSH were not open directly to the protected machines.

Azure Bastion should offer the same isolation. Your NSG rules should only permit RDP/SSH from:

  • The AzureBastionSubnet
  • Any other bastion hosts that might be employed, typically by developers who will deploy specialist tools.

Azure Bastion requires:

  • An Entra ID sign-in, ideally protected by features such as conditional access and MFA, to access the bastion service.
  • The destination machine’s credentials.

Routing

Now we get to one of my favourite topics in Azure. In the on-prem world, we can control how packets get from A to B using cables. But as you’ve learned, we cannot run cables in Azure. We can, however, control the next hop of a packet.

We want to control flows:

  • Ingress from site-to-site networking to flow through the hub firewall: A route in the GatewaySubnet to use the hub firewall as the next hop.
  • All traffic leaving a spoke (workload virtual network) to flow through the hub firewall: A route to 0.0.0.0/0 using the firewall backend/private IP as the next hop.
  • All traffic between hub & spokes to flow through the remote hub firewall: A route to the remote hub & spoke IP prefix (see above tip) with a next hop of the remote hub firewall.
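The spoke route from the second bullet might be sketched in Bicep like this; the firewall’s private IP and the names are assumptions:

```bicep
// Route table for a spoke subnet: send all traffic to the hub firewall.
param location string = resourceGroup().location

resource spokeRoutes 'Microsoft.Network/routeTables@2023-04-01' = {
  name: 'rt-spoke-workload1'
  location: location
  properties: {
    routes: [
      {
        name: 'all-via-hub-firewall'
        properties: {
          addressPrefix: '0.0.0.0/0' // everything leaving the spoke
          nextHopType: 'VirtualAppliance'
          nextHopIpAddress: '10.0.0.4' // hub firewall private IP (assumed)
        }
      }
    ]
  }
}
```

Associate the route table with the workload subnet(s); the GatewaySubnet gets its own table pointing spoke prefixes at the firewall.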

If you follow my tips, especially with the simple hub, then the routing is actually quite easy to implement and maintain.

Tips:

  • Keep the hub free of compute.
  • NSG Traffic Analytics helps to troubleshoot.

Web Application Firewall

The hub firewall should not be used to present web applications to the Internet. If a web app is classified as requiring network security, then it should be reverse proxied using a Web Application Firewall (WAF). This specialised firewall inspects traffic at the application layer and can block threats.

The WAF will have a lot of false positives. Heavy-traffic applications can produce a lot of them in your logs; in the case of Log Analytics, the ingestion charge can be huge, so get to optimising away those false positives as quickly as you can.

My preference is to route the WAF through the hub firewall to the backend applications. The WAF is a form of compute, even the Azure WAF. If you do not need end-to-end TLS, then the firewall could be used to inspect the HTTP traffic from the WAF to the backend using Intrusion Detection and Prevention System (IDPS), offering another layer of protection.

Azure offers a couple of WAF options. Front Door with WAF is architecturally interesting, but the default design is that the backend has a public endpoint that limits access to your Front Door instance at the application layer. What if the backend is network connected for max protection? Then you get into complexities with Private Link/Private Endpoint.

A regional WAF is network connected and offers simpler networking, but it sacrifices the performance boosts from Front Door. You can combine Front Door with a regional WAF, but there are more costs with this.

Third-party solutions are possible. Services such as Cloudflare offer performance and security features. One could argue that Cloudflare offers more features. From the performance perspective, keep in mind that Cloudflare has only a few peering locations with the Microsoft WAN, so a remote user might have to take a detour to get to your Azure resources, increasing latency.

You can seek out WAF solutions from the likes of F5 and Citrix in the Azure Marketplace. Keep in mind that NVAs can perpetuate skills challenges by siloing the skill – cloud-native skills are easier to develop and contract/hire.

Summary

I was going to type something like “this post gives you a quick tour of the micro-segmentation approach/features that you can use in Azure” but then I realised that I’ve had keyboard diarrhea and this post is quite Sinofskian. What I’ve tried to explain is that the ways of the past:

  • Don’t do much for security anymore
  • Are actually more complex in architecture than Azure-native patterns and solutions that work.

If you implement security at three layers, assuming that a breach will happen and could happen anywhere, then you limit the blast area of a threat:

  • The edge, using the firewall and a WAF
  • The NIC, using a Network Security Group
  • The resource, using a guest OS/resource firewall

This trust-no-one approach that denies all but the minimum required traffic will make life much harder for an attacker. Adding logging and a well-configured SIEM will create trip wires that an attacker must cross to attempt an expansion. You will make their expansion harder & slower, and make it easier to detect them. You will also limit how far they can spread and how much damage the attack can create. Furthermore, you will be following the guidance that the likes of the FBI are recommending.

There is so much more to consider when it comes to security, but I’ve focused on micro-segmentation in a network context. People do think about Entra ID and management solutions (such as Defender for Cloud and/or SIEM) but they rarely think through the network design by assuming that what they did on-prem will still be fine. It won’t because on-prem isn’t fine right now! So take my advice, transform your network, and protect your assets, shareholders, and your career.

A Beginner’s Guide To The MVP Summit (2024)

This is my updated post providing information on what the MVP Summit is, what to expect, and some useful tips/tricks for the neighborhood.

This is a big update on a post that I wrote in 2012.

What’s an MVP?

The MVP (Most Valuable Professional) award from Microsoft is exactly that – an award for expert community services relevant to products or services that Microsoft offers.

Microsoft used to describe MVPs as:

MVPs are independent experts who are offered a close connection with people at Microsoft. To acknowledge MVPs’ leadership and provide a platform to help support their efforts, Microsoft often gives MVPs early access to Microsoft products, as well as the opportunity to pass on their highly targeted feedback and recommendations about product design, development, and support.

Now the description has changed a little but the spirit is still the same:

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products, and solutions, to solve real world problems.

There are thousands of MVPs from ~90 countries/regions around the world. Allegedly, there are 4,000 MVPs, but I think that number might be a little lower.

Each MVP is an expert in one award area (some have a few awards). I am an Azure MVP, and most of my contact is with other Azure MVPs and the program managers for Azure technologies that I am interested in.

To achieve MVP status, one has to be nominated by an MVP or a Microsoft employee. A review process runs monthly to see if the candidate has 1 year of expert/relevant community (not as part of their employment) contributions and falls into the top X% of their award category. If they do, they are contacted and asked to sign an NDA. They are now an MVP until their next annual review when the process repeats – that starts in March and the renewal notification is in early July.

The NDA is a big deal – MVPs are getting behind the curtain and learning a lot of things that are not for the public: how things are done, getting into strategy discussions, hearing about things that aren’t even at private preview stage, and so on. People do breach their NDA, and they get kicked out. There is also an expected behavior, which also leads to some ejections.

MVP Summit

There are lots of benefits to being an MVP, but the MVP Summit is the crown jewel. Once per year (there have been exceptions to this), MVPs are invited to Redmond, Microsoft’s global HQ, to meet with the program managers (PMs) of the various Microsoft products and services. This is a conference where pretty much all of the content is under NDA.

This event is a big deal for everyone. MVPs want to go to mingle and learn about new things. Microsoft schedules the Summit for a sweet spot in their sprint planning cadence so that they can get feedback from MVPs on what they are planning.

There are lots of different kinds of sessions:

  • Educational: Deep dives on how something works.
  • Futures briefing: Here’s what we are planning on working on/releasing.
  • Discussion: We want feedback.
  • Leadership sessions: Various leaders – no names here but it’s not hard to guess – host sessions to discuss their strategies, their year, or answer questions.

Outside of this, there are formal and informal track sessions:

  • Tours: Go somewhere that the public never gets to see or hear about.
  • Hands-on labs: A chance to learn directly from the team that created something.
  • Privately organised meetings: Arrange private meetings with PMs to discuss relevant topics.

You can see how the MVP Summit is a conference like no other Microsoft conference!

Location

The MVP Summit is held in Redmond, a city east of Seattle that is dominated by the Microsoft campus.

Most MVPs opt to stay in the nearby city of Bellevue, which is between Seattle and Redmond. Bellevue is a lovely town with a main street featuring the Bellevue mall. On my first MVP Summit, around 15 years ago, most of us stayed in one of the few hotels in the area, such as the Red Lion, Hyatt, Westin or Hilton. Every time I return, a new set of tower cranes is erecting a tall building that develops into yet another hotel.

The accommodation system has changed, so MVPs book wherever they want. Some will organise before booking, and their cliques will book in the same hotel or nearby hotels. Years ago, most Hyper-V and System Center MVPs would opt for the centrally located Hyatt. The Exchange MVPs were often to be found in The Westin, across the road from the mall and connected by a bridge to the Hyatt. A group of Germans would stay in the Red Lion, which is a bit more remote but much more affordable. A few will choose to stay in a hotel closer to the Redmond campus, but there’s little to do there and they are far from all the nighttime activities. These days, you’ll find us anywhere and everywhere, but still mostly grouped.

Transport plays a role in location. We used to have organised buses to ferry us to and from the campus. That is no longer the case so people are choosing their location based on public transport or car park availability/price.

The Campus

You will find Microsoft offices all around the greater Seattle area: Seattle, Bellevue, and of course, in Redmond where the Summit is located.

Microsoft Redmond Main Campus Map & Buildings https://campusbuilding.com/

The campus is huge, spanning ~100 buildings connected by roads with Microsoft-owned taxis and bus services. Car parks are scattered all around the tree-lined streets and parks. You can never see very much from one location – a mix of parks, recreation areas, trees, and buildings always blocks your view. The buildings aren’t very tall, but they can go on quite a bit. When I was there last year, a light rail system was still being constructed to connect Redmond with downtown Seattle, to avoid the peak-time traffic which barely moves on the highways – keep in mind that the Boeing factory and Amazon HQ are nearby, along with lots of other big companies.

At a good time, a bus or car journey from downtown Bellevue should take 15-20 minutes to your destination in Redmond. But traffic, especially in the afternoon and evening, is pretty awful heading back south.

The campus is a mish-mash of all kinds of buildings. You’ll find older buildings that date back decades. Microsoft has been iterating through these buildings for the past few years, either renovating them or knocking them down and replacing them. Each building is a self-contained unit with offices, meeting rooms/areas, kitchens, canteens, and shared parking areas. You can only gain entry to a building if you are expected or invited – so don’t bother trying to sneak in.

The Summit is spread across many buildings. Building 33 is the conference centre and the bigger groups can be found there. But other groups can be anywhere around the campus. You can walk from building to building – there are plenty of footpaths if the weather is OK. This being the Pacific Northwest, rain is never far away. In that case, there are shuttles for certain routes, normally based out of Building 33. If you have a special destination that you need to get to and walking is not an option, then you can ask at reception for a taxi and a Microsoft car or bus will collect you at the door.

If there are any questions, one of the staff (traditionally wearing a purple top) will be ready to help. The folks here probably run one event after another all year round and know what they are doing.

One of the nice perks is a trip to The Commons and building 92. Here there are two cool things to visit.

  1. The Company Store: We have normally been granted a voucher to allow us to spend up to $200 (of our own money) in the Company Store. You can find various hard-to-find bits and bobs, such as bags and clothing, that are sold at retail prices. But the real finds are the Microsoft accessories and software, which are sold at cost price. Imagine getting several years of Microsoft 365 Home for less than the price of 1 year? Or Xbox Game Pass? Make sure you talk to a staff member before purchasing if you are not a USA resident because activation will not work without doing some special stuff.
  2. The museum: You can walk through the history of Microsoft from day 1.

By the way, not far from Building 33 are Buildings 16 and 17. They share a courtyard that is a literal walk through the release history of Microsoft right through its early years.

Activities

A lot is going on during the MVP Summit. Imagine a conference that has a select few attendees, many of whom get to know each other over the years. Even if you are a newbie, you probably know some of the others through some kind of community activity. It is rare that an MVP attends the Summit and doesn’t know anyone.

A lot of evening activities are arranged:

  • Microsoft will run receptions: These are normally directly after the last session in one (or several) campus building(s) and can last 1 hour or into the night. Food and drink are provided.
  • Sponsors: Some community groups or companies that want to get to know the MVPs can arrange a party in one of the local bars.
  • Informal: Friends will get together and arrange something – dinner, drinks, karting, whatever.

Outside of the Summit itself, you have Seattle and the surrounds to explore:

  • An outlet mall in Tulalip, about 45 minutes north of Bellevue. You would be amazed how many MVPs will be there the day before the Summit.
  • Downtown Seattle with the Pike Place Market, Space Needle, or a visit to the home of the Seattle Seahawks Cheathawks (Go Niners!).
  • Oodles of outdoor opportunities like Mount St. Helens, Olympic National Park, or the Cascades (listen out for the dueling banjos).
  • The Boeing museum.
  • Shopping at the big stores like Walmart and Target – fun for us from outside of North America.
  • The Bellevue mall with lots of shops (Apple), bars, and restaurants.
The view from the Seattle Space Needle

Eating and Drinking

Breakfast, snacks, and lunch are provided at the MVP Summit. When there is a reception, there are usually some light eating options. Coffee, tea, and bottled/canned drinks are everywhere (and free) in the buildings. So do not waste money in your hotel!

The question is where do you go before/after the Summit? That will be based on your hotel location, but many attendees opt for Bellevue so here are a few options.

Eating:

  • The Cheesecake Factory at the mall is a very popular option. Yes, it serves meals, not just dessert. Don’t get me wrong, the cake is amazing, but only those of you with something wrong with you will have room for it after a meal. Do not have a starter/appetizer, because the mains are HUGE. Go straight to the main course. And you can, if you wish, get an amazing slice of cheesecake to eat in or take away in a handy container.
  • Palminos: It’s near the Westin, across the road from the Cheesecake Factory. I used to get breakfast there but it was already expensive before prices went up.
  • Denny’s: If you want the American eat-till-you-collapse experience, then this is it. You will need to travel, but there is nothing like it. You will not eat again until late that night.
  • Fish, Chinese & Mexican: There are a few options beside the Cheesecake Factory, but it’s years since I went to any of them and I really don’t recollect them.

On the bar side of things, there are a few places around central Bellevue. Some of the MVPs feel like they must go to Joey’s beside the Hyatt. Personally, I think it’s an overpriced dump filled with posers and serving undersized drinks. The only good thing about the place is the car park out front, facing the Hyatt main door, where you’ll see a few cool cars. Otherwise, just keep walking. You’ll find a few bars on the walk down from the Hyatt (or up from the Westin) on the way to the mall. One spot, which is upstairs from the Cheesecake Factory, has become popular in recent years with MVPs.

Beyond central Bellevue, there are lots of other eating and drinking options. Washington State has a big craft beer thing going on, so it is worth wandering. You’ll also find other chain restaurants if you want to hop in car/taxi/Uber.

Travel

The local airport is SeaTac (SEA) Seattle-Tacoma. For the inbound trip, you should know that:

  • Baggage can take an eternity to arrive. I prefer to travel light with a carry-on bag. If I intend to come home with more stuff, then I bring a collapsible check-in bag. That way I can get out as quickly as possible.
  • The car rental station is in Canada. OK, that’s a stretch, but so is the shuttle journey.
  • The taxi and car-share pickups are across the road in the car park – follow the confusing signs.

For departure, you should know:

  • Traffic to SeaTac from Redmond is dreadful from early afternoon onwards. Expect delays and plan accordingly.
  • This is a typical old, dreadful American airport. Check-in is cramped, and the queues for security can take well over an hour.
  • SeaTac does operate a priority queue system called Spot Saver.
  • There are limited/no dining options depending on the terminal that you are in. If you have time, then check out the central options before you head to the gate area. Or preferably, eat before you leave for the airport.

For those of you travelling from Europe:

  • Flights from Heathrow and Frankfurt are pretty quick at getting to Seattle. I’ve done the Dublin – Heathrow – Seattle route and it’s amazing how much quicker the BA flight is.
  • There are two downsides to Frankfurt and Heathrow. I would need to allow 3 hours for a transfer in Heathrow. I used to be OK with short transfers in Frankfurt, but I have heard bad things in recent years. And then, of course, there is immigration, which you have to do in Seattle.
  • If you are doing a European transfer then consider a flight to Dublin and then the direct Dublin to Seattle route with Aer Lingus. Direct flights from Ireland to the USA can do immigration in Dublin Airport. It usually takes no more than 15 minutes and I’ve been second in line more often than not when I walked into the hall. After immigration there are few eating/dining options so take advantage of the main section in Terminal 2 first or splash out a few quid on the Lounge after immigration.

Many attendees will fly in on the Saturday before the Summit for two reasons:

  • Jetlag: UK/Ireland folks will have an 8-hour time difference and be waking up at 2am for the first few days. It’s worse for folks further east.
  • Unexpected content: When the Summit is announced, there are usually 3 days in the schedule with no agenda. You have to book flights early to get good prices. But if you fly in just the day before then you can end up missing out on content that the Summit or product groups add.
  • Tourism/shopping/meeting up: There’s a chance to go and do some stuff while you are dealing with jetlag.

Similarly, those who don’t have/feel pressure to get home often stay an extra day or two. It is not uncommon for the Summit or product groups to tack on content, such as a bootcamp on some hot button topics.

NDA

Did I mention the NDA? Every session starts with a slide that says something along the lines of:

No photos

No social media posts

No recording

And guess what? Every year, we’ll see someone sticking their phone up, pointing their Surface, or whatever. And there are always stories of someone being ejected from the Summit and the program. The product groups are putting a lot of trust in attendees and some idiots just don’t listen.

Microsoft also expects similar attention to the code of conduct. You have people of all types from all around the world presenting and attending. The last thing you need is racism or some other kind of stupidity. Respect is due to everyone.

What I Get From Summit

The MVP Summit is a “work” highlight for me every year. Obviously, I enjoy going for the NDA content. But it’s so much more.

I’ve been hanging out with the same small group of people for 10+ years now. In the last couple of years, that has expanded to include more people. The funny thing is that I work with one, live 20 minutes from another, and used to work with another. And I only ever see them in person at events like this! I have also made friends from around the world whom I only ever get to meet at community events or The Summit – The Summit is the guts of a week, so I see them a bit more.

The sessions are a mixed bag. Some, as at all events, can be rubbish repeats from Ignite, but feedback over the years has tweaked sessions. I know that some PMs even reach out in advance to get advice on what attendees are expecting. For the most part, the PMs stick to current/future stuff or bring a requested deep dive on something that is confusing the community. And sometimes, you learn how the sausage is made – those sessions provide incredible value – some Hyper-V content from 12 years ago is still paying off in Azure (based on Hyper-V).

Some of the real value happens outside of the sessions. Sometimes a PM is lurking in the back and paying attention to questions. I’ve had PMs heat-seek me after I’ve asked questions or given feedback – leading to follow-up chats in the hallway, hastily booked meetings, or follow-up Teams calls. When I was a Hyper-V MVP, I got to participate in some “spring planning” meetings with a small group of MVPs and PMs for Windows Server. There’s one popular feature, added in Windows Server 2016, for which I distinctly remember describing how I wanted it to work – and that’s how it was released 🙂 Things like this are possible at the Summit because both the community experts and the PMs that help design the features are there and are interested.

There is also a … I don’t know how to put this in words, but a sense of direction that you pick up at The Summit. The timing of The Summit (let’s forget the COVID years) is right when new ideas are swirling around all of Microsoft. You get a sense of these ideas and shifts before the big public push, which may eventually appear at Build or Ignite later in the year. Sometimes it’s more subtle and is never formally announced – it just gradually happens, but you know about it.

Azure & Oracle Cloud Interconnect

This post will explain how you can connect your Azure network(s) with Oracle Cloud Infrastructure (OCI) via the Oracle Cloud Interconnect.

Background

Many mid-large organisations run applications that are based on Oracle software. When these organisations move to the cloud, they may choose to use Oracle Cloud for their Oracle workloads and Azure for everything else.

But that raises some interesting questions:

  1. How do we connect Azure workloads to Oracle workloads?
  2. If Oracle is hosting data services, how do we minimise latency?

The answer is: The Oracle Cloud Interconnect (OCI).

Azure ExpressRoute and Oracle FastConnect

Microsoft and Oracle are inter-connected via their respective private “site-to-site” connection mechanisms:

  • Azure: ExpressRoute
  • Oracle: FastConnect

This is achieved by both service providers sharing a “meet me” location where each cloud’s edge networks allow a “cross-connection”. So, there is no need to contact an ISP to lease an ExpressRoute circuit. The circuit already exists. There is no need to sign a circuit contract. The ISP is “Oracle” and you pay for the usage of it – in the case of Azure by paying for the ExpressRoute circuit Azure resource.

Location, Location, Location

The inter-connect mechanism obviously plays a role in where you can deploy your ExpressRoute Circuit and FastConnect resource. But performance also comes into play here – latency must be kept to a minimum. As a result, there is a support restriction on which Azure/Oracle regions can be inter-connected and where the circuit must be terminated.

At the time of writing, the below list was published by Microsoft:

What does this mean?

Let’s imagine that we are using OCI Amsterdam. If we want to connect Azure to it then we must use Azure West Europe.

Now, what about keeping that latency low? The trick there is in selecting a Peering Location that is close by. Note that the Oracle docs do a better job at defining the Azure peering location (see under Availability).

In my scenario, the peering location would be Amsterdam2. According to Microsoft:

Connectivity is only possible where an Azure ExpressRoute peering location is in proximity to or in the same peering location as the OCI FastConnect.

That means you must always keep the following close to be able to use this solution:

  • The Oracle Cloud Infrastructure region
  • The Azure region
  • The peering location of the ExpressRoute circuit & FastConnect circuit

Configuring ExpressRoute

You have a few options to decide between. The first is the SKU of ExpressRoute that you will choose.

  • Local – Billing: Unlimited. Connections: 1 or 2 Azure regions in the same metro as the peering location.
  • Standard – Billing: Metered or Unlimited. Connections: up to 10, in the same geo zone as the peering location.

You also have to choose one of the supported speeds for this solution: 1, 2, 5, or 10 Gbps.

The ISP will be  Oracle Cloud FastConnect.

So do you choose Local or Standard? I think that really comes down to balancing the cost. Local has unlimited data transfer but it is billed based on bandwidth. The entry cost per month in Zone 1 is €1,111.27/month with 1 Gbps and unlimited data transfer.

The entry point for a Standard metered plan is €403.76/month. That is €707.51 cheaper than the Local SKU, but that saving has to cover your outbound data transfer cost in Azure. At €0.024/GB, that leaves you with (707.51/0.024) roughly 29,479 GB of outbound data transfer per month before the Local SKU becomes more affordable.

The safe tip here is to choose Local, monitor data usage, and consider moving to Standard if you are using a small enough amount of outbound data transfer to make the metered Standard SKU more affordable.
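For anyone who wants to run the numbers themselves, the break-even maths can be sketched in a few lines of Python. The prices are the Zone 1 figures quoted above; treat them as placeholders that will go stale, and plug in current figures from the Azure pricing page:

```python
# Break-even between ExpressRoute Local (unlimited egress, higher base price)
# and Standard metered (lower base price, pay per GB of egress).
# Prices below are the example Zone 1 figures from this post - assumptions only.
LOCAL_PER_MONTH = 1111.27      # EUR/month, 1 Gbps Local SKU
STANDARD_PER_MONTH = 403.76    # EUR/month, 1 Gbps Standard metered SKU
EGRESS_PER_GB = 0.024          # EUR per GB of outbound data transfer

def break_even_gb(local: float, standard: float, per_gb: float) -> float:
    """Monthly outbound GB at which Local and Standard metered cost the same."""
    return (local - standard) / per_gb

threshold = break_even_gb(LOCAL_PER_MONTH, STANDARD_PER_MONTH, EGRESS_PER_GB)
print(f"Break-even: {threshold:,.0f} GB/month")  # ~29,480 GB/month
```

Below the threshold, Standard metered is cheaper; above it, Local’s unlimited transfer wins.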

Note that you can upgrade from Local but you cannot downgrade to Local.

Getting Connected (From Azure)

I’ll talk about the Azure side of things because that’s what I know. I will cover a little bit about Oracle, from what I have learned.

You will need an ExpressRoute Gateway in the selected Azure region. Then you will create an ExpressRoute Circuit in the same region:

  • Your chosen SKU/billing model.
  • The speed from 1, 2, 5, or 10 Gbps.
  • The Provider is Oracle Cloud FastConnect.
  • The peering location from the Oracle docs.

Retrieve the service key and then continue the process in the OCI portal. There is one screen that is very confusing: configuring the BGP addresses.

You are going to need two /30 prefixes that are not used in your OCI/Azure networks. I’m going to use 192.168.0.0/30 and 192.168.0.4/30 for my example. You need two prefixes because Azure and Oracle are running highly available resources under the covers. The ExpressRoute Gateway is two active/active compute instances. Each will require an IP address to advertise/receive address prefixes via BGP from the OCI gateway, and vice versa.

What addresses do you need? Oracle requires you to enter:

  • Customer (Azure) BGP IP Address 1
  • Oracle BGP IP Address 1
  • Customer (Azure) BGP IP Address 2
  • Oracle BGP IP Address 2

Here’s how you calculate them:

  • Customer (Azure) BGP IP Address 1: Usable IP #2 from Prefix 1
  • Oracle BGP IP Address 1: Usable IP #1 from Prefix 1.
  • Customer (Azure) BGP IP Address 2: Usable IP #2 from Prefix 2
  • Oracle BGP IP Address 2: Usable IP #1 from Prefix 2

The below is not the final answer yet! But we’re getting there. That would lead us to calculating:

  • Customer BGP IP Address 1: 192.168.0.2
  • Oracle BGP IP Address 1: 192.168.0.1
  • Customer BGP IP Address 2: 192.168.0.6
  • Oracle BGP IP Address 2: 192.168.0.5

But the Oracle GUI has an illogical check and will tell you that those addresses are wrong. They are correct – it’s just the Oracle GUI is broken by design! Here is what you need to enter:

  • Customer BGP IP Address 1: 192.168.0.2/30
  • Oracle BGP IP Address 1: 192.168.0.1/30
  • Customer BGP IP Address 2: 192.168.0.6/30
  • Oracle BGP IP Address 2: 192.168.0.5/30

You finish the process and wait a little bit. The ExpressRoute circuit will eventually change status to Provisioned. Now you can create a connection between the circuit and the ExpressRoute Gateway. When I did it, the Private Peering was automatically configured, using 192.168.0.0/30 and 192.168.0.4/30 as the peering subnets.

Check your ARP records and route tables in the circuit (under Private Peering) and you should see that Oracle has propagated its known addresses to your Azure ExpressRoute Gateway, and on to any subnets that are not blocking propagation from the gateway.

And that’s it!

Other Support Things

The following Oracle services are supported:

  • E-Business Suite
  • JD Edwards EnterpriseOne
  • PeopleSoft
  • Oracle Retail applications
  • Oracle Hyperion Financial Management

Naturally, your OCI and Azure networks must not have overlapping prefixes.

You can do transitive routing. For example, you can route through the interconnect to an Oracle network and then on to a peered Oracle network (a hub and spoke).

You cannot use the interconnect to route to on-premises from Azure or from OCI.

Your Hub VNet Should Have No Compute

This post is going to explain why you should not be putting any compute into your hub VNet.

Background

I was looking at some Azure Landing Zones (reference architectures) from Microsoft before the end of 2023. I was shocked to see compute (VMs) being placed in the hub. Years ago, I learned that putting any kind of compute in the hub eventually leads to issues that are not obvious at first. I would have expected Microsoft to know better.

I posted something on Twitter and LinkedIn. Sure, there were plenty of people that agreed with me. However, there were respondents from Microsoft and elsewhere who didn’t see the problem. I explained it, as best as one could in a limited chat, but either people didn’t see the responses, were lazy, or something else 🙂

I decided to write this post to explain the problems with placing things in a hub.

Problem Summary

There are two issues with placing things in a hub:

  1. Routing complexity: When one expands to more than one hub & spoke (regional footprints), the network requirements for a micro-segmented security model will become complex. Complexity breaks security eventually. Keep it simple, stupid!
  2. “Shared services syndrome”: Once you place any kind of shared service in the hub, someone will start asking about putting web servers, databases, and file shares in the hub. Then why do you have spokes? And then we make problem 1 even worse.

Routing Simplicity

I want to start with the ideal – simplicity. My hub and spoke design is far from unique. It’s actually quite simple – making it easy to understand, troubleshoot and secure.

Simple Hub and Spoke

The hub contains only the minimum required networking items with no compute. The above hub contains:

  • A GatewaySubnet with Azure VPN and/or ExpressRoute gateway(s)
  • An AzureFirewallSubnet for the Azure Firewall
  • An AzureBastionSubnet for Azure Bastion, which must go in the hub (for routing reasons) in a VNet hub-and-spoke scenario where the Bastion will be shared.

There is flexibility:

  • NVA router for SD-WAN
  • Azure Route Server
  • Azure Firewall management subnet (for tunneling today)
  • Swap out Azure Firewall for an NVA (yuk!)

The beauty is the simplicity. The routing model controls the micro-segmentation security. Nothing is trusted.

  • Inbound from on-premises: The UDRs in the GatewaySubnet force traffic through the Azure Firewall to reach the spokes. Have a look at this BGP-powered alternative using Azure Route Server by Jose Moreno.
  • Egress and East-West: Any traffic leaving a spoke must route through the firewall in the hub – including spoke-to-spoke, spoke-to-Internet(Azure), and spoke-to-LAN/WAN. Routes to the Internet and on-premises are present/propagated to the AzureFirewallSubnet, and any traffic to those destinations is handled by that subnet.

Two routes control everything for any given spoke. Note that traffic inside of a spoke is subject to the default Virtual Network route (direct from A to B via VXLAN).

What happens if I need to scale out to more Azure regions? I’ll drop in another hub & spoke and peer the hubs. My micro-segmentation model states that nothing trusts anything else, so footprint1 does not trust footprint2. To accomplish this we will peer the hub VNets to force traffic to route via the firewalls.

I’ve dropped in another hub & spoke with a different IP range. Footprint 1 was 10.0.0.0/16. The new footprint, Footprint2 is 10.10.0.0/16. Connecting the footprints is easy – you peer the hubs. The two hub VNets can route to each other. There’s no compute or data in the hubs so I don’t need to do any isolation. But I do need spokes in the two footprints to be able to route to each other.

We can enable end-to-end connectivity with one route per hub. A route table is added to the AzureFirewallSubnet. A UDR for the neighbouring footprint is added, with the next hop being the firewall in the neighbour.

For example, in Footprint1, I want to be able to reach the spokes in Footprint2. Footprint2 is 10.10.0.0/16. In the Footprint1 AzureFirewallSubnet, I will add a UDR to 10.10.0.0/16 with the next hop of the Footprint2 firewall, 10.10.1.4. Now, subject to the firewall and NSG routes, Spoke1 in Footprint1 can route to Spoke3 and Spoke4 in Footprint 2 and vice versa. Simple!
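To make that concrete, here is a toy longest-prefix-match lookup in Python: from Footprint1’s AzureFirewallSubnet, a single UDR covers every spoke in Footprint2. The prefixes and next-hop address are the example values from this post, nothing more:

```python
# A toy route lookup illustrating the single-UDR-per-neighbour model.
# Routes are (destination prefix, next hop) pairs as seen from Footprint1's
# AzureFirewallSubnet; values are from the hypothetical example above.
import ipaddress
from typing import Optional

footprint1_firewall_routes = [
    ("10.10.0.0/16", "10.10.1.4"),  # all of Footprint2 -> Footprint2's firewall
]

def next_hop(routes, destination: str) -> Optional[str]:
    """Return the next hop for a destination; the longest matching prefix wins."""
    dest = ipaddress.ip_address(destination)
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in routes
        if dest in ipaddress.ip_network(prefix)
    ]
    if not matches:
        return None  # falls through to system/default routes
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Traffic from Spoke1 to a VM in Spoke3 (Footprint2) is handed to 10.10.1.4:
print(next_hop(footprint1_firewall_routes, "10.10.2.10"))  # 10.10.1.4
```

One route, and everything in the neighbouring footprint is reachable via its firewall.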

Simplicity is the key to security. Nothing breaks this model as long as I keep the hub empty of compute.

Everything in IT is “shared”. That’s why a “server” serves – it shares something, not only to users but to other servers in the same workload and to other workloads. Where do I place that “server”? All “servers” go into a spoke.

In micro-segmentation, there is no difference between the VNets. They’re all isolated. There are no DMZs. There are no secure zones. All VNets are isolated from all other VNets and there is no trust – we assume breach at all times. Welcome to modern network security following the guidance from various national agencies to combat APTs.

By the way, in this case, if I need a DMZ DNS server (not that it makes sense to have one anymore – that’s another post) – it goes into a spoke 🙂

Putting Stuff In The Hub

Now we will start copying what some of those Microsoft ALZs do: we will put some compute into the hubs.

If you inspect the hubs you will find a new subnet of x.x.3.0/24 with some VMs in there – some DNS servers 🙂 Good security practice will mandate that I force traffic from 10.0.3.0/24 to route via the two firewalls. That’s easier said than done.

By default, traffic from the subnets in peered VNets will route directly from the source to the destination. Peering expands the VXLAN connections from a single VNet to peered VNets. There is no automated interpretation of intent. We will have to add a route to the compute subnets to state that the next hop to the remote compute subnet is via the local firewall. Then we need a route in the AzureFirewallSubnet to state that the next hop to the remote compute subnet is the remote firewall.

Oh – one more thing – and the diagram does not show this. Each network resource in the spokes now talks to the compute subnet in the local hub directly without going through the firewall – and vice versa. If that central compute is compromised, then the firewall will play no role in isolating the spokes from it or in detecting the spread of the APT. We will need to add routes:

  • Compute subnet: for each spoke, similarly to the AzureGatewaySubnet
  • Spoke subnets: to force traffic to x.x.3.0/24 via the Azure Firewall to avoid asymmetric routing.

Oh – and just one more thing – which is also not in the diagram. Each GatewaySubnet will require a route to the local x.x.3.0/24 to use the Azure Firewall as the next hop. Otherwise, on-premises (where attacks will likely come from) will have free access to the Compute subnet. You’ll have to make sure that routes from the GatewaySubnet propagate to the Compute subnet to avoid asymmetric routing.

Now let’s scale that out to 3 or 4 footprints. How complex are things getting now? Is there room for mistakes?
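As a rough back-of-envelope exercise, you can count the extra UDRs per footprint. This is my own counting of the routes described above, so treat the formulas as an illustration of the growth, not an exact audit of any particular design:

```python
# Rough UDR counts per footprint: n = number of footprints, s = spokes per
# footprint, one compute subnet per hub. These formulas are illustrative
# assumptions derived from the routes described in the text.
def routes_clean(n: int) -> int:
    # Empty hub: each AzureFirewallSubnet needs one UDR per remote footprint.
    return n - 1

def routes_with_hub_compute(n: int, s: int) -> int:
    remote = n - 1
    return (remote          # AzureFirewallSubnet -> each remote footprint
            + remote        # compute subnet -> each remote compute subnet
            + remote        # AzureFirewallSubnet -> remote compute subnets
            + s             # compute subnet -> each local spoke
            + s             # each local spoke -> compute subnet
            + 1)            # GatewaySubnet -> local compute subnet

for n in (2, 3, 4):
    print(f"{n} footprints: clean={routes_clean(n)}, "
          f"with hub compute={routes_with_hub_compute(n, s=2)}")
```

Even with generous rounding, the empty-hub model stays at one route per neighbour while the hub-compute model fans out across four different subnets.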

Shared Services Syndrome

I saw this happen years ago. Many moons ago, I followed a reference architecture from Microsoft to create the reference network design for my employer. That reference included compute in the hub. It was a very special compute: domain controllers. I could see the logic: these are special machines that every Windows VM will talk to – they go into the hub.

Not long after, we had customers stating that they wanted databases and file servers to go into the hub. They simply followed our logic: domain controllers are shared services, and so are the file server and the database. How do you argue against that?

In v2.0 of my design, which quickly followed v1.0, all compute was stripped out of the hub. The argument to put shared services into the hub was gone.

I can imagine the consultants saying “I won’t allow more compute in the hub”. OK, but what happens when you are gone or a less argumentative colleague who is willing to do stuff for the customer takes your place? Have you done your customer a disservice by setting a bad precedent?

Let’s add another subnet into the hub. Let’s add more. Let’s expand the address space of the hub – a colleague showed me a hub design (by a competitor) where the hub address space was expanded 5 times! Imagine how much compute is in that hub. How many routes must you inject to make that network secure? Is that network even secure at all? It would take quite an audit to discover what is going on there.

Keep It Simple, Stupid (KISS)

I am a fan of simplified engineering. When it is simple and easy to understand, then it is easy to maintain and to secure. Too often, engineers are too clever. They want to make exceptions and show off how clever they are. KISS is the best approach to engineering – and to security.

Getting Private Endpoints To WORK In The Real World

In this Festive Tech Calendar post, I am going to explain how to get Private Endpoints working in the real world.

Thank you to the team that runs Festive Tech Calendar every year for the work that they do and for raising funds for worthy causes.

Private Endpoints

When The Cloud was first envisioned, it was built as a platform that didn’t really take network security seriously. The resources that developers want to use, Platform-as-a-Service (PaaS), were built to only have public endpoints. In the case of Microsoft Azure, if I deploy an App Service Plan, the compute that is provisioned for me shares public IP address(es) with plans from other tenants. The App Service Plan is accessible directly on the Internet – that’s even true when you enable “firewall rules” in an App Service, because those rules only control which HTTP/S requests will be responded to, so raw TCP connections (zero-day attacks) are still possible.

If I want to protect that App Service Plan, I need to make it truly private by connecting it to a virtual network, using a private IP address, and maybe placing a Web Application Firewall in the flow of the client connection.

The purpose of Private Endpoint is to alter the IP address that is used to connect to a platform resource. The public endpoint is, preferably, disabled for inbound connections and clients are redirected to a private IP address.

When we enable a Private Endpoint for a PaaS resource, a Private Endpoint resource is added and a NIC is created. The NIC is connected to a subnet in a virtual network and obtains or is supplied with an IP address for that subnet. All client connections will be via that private IP address. And this is where it all goes wrong in the real world.

If I browse myapp.azurewebsites.net my PC will resolve that name to the public endpoint IP address – even after I have implemented a Private Endpoint. That means that I have to redirect my client to the new IP address. Nothing on The Internet knows that private IP address mapping. The only way to map the FQDN of the App Service to the private endpoint is to use Private DNS.

You might remember this phrase for troubleshooting on-premises networks: “it’s always DNS”. In Azure, “it’s always routing, then it’s always DNS”, but the DNS part is what we need to figure out, not just for this App Service but for all workloads/resource types.

The Problems

There are three main issues:

  • Microsoft Documentation
  • Developers don’t do infrastructure
  • Who does DNS in The Cloud?

Microsoft Documentation

The documentation for Private Endpoint ranges from excellent to awful. That variance depends on the team/resource type that is covered by the documentation. Each resource team is responsible for their own implementation/documentation. And that means some documentation is good and clear, while some documentation should never have made it past a pull request.

The documentation on how to use Private Endpoint focuses on single workloads. You’ll find the same is true in the certification exams on Microsoft networking. In the real world, we have many workloads. Clients need to access those workloads over virtual networks. Those workloads integrate with each other, and that means that they must also resolve each other’s names. This name resolution must work for resources inside of individual workloads, for workload-to-workload communications, and for on-premises clients-to-workload communications. You can eventually figure out how to do this from Microsoft documentation but, in my experience, many organisations give up during this journey and assume that Private Endpoint does not work.

Developers Don’t Do Infrastructure

Imagine asking a developer to figure out virtual networks and subnetting! OK, let’s assume you have reimagined IT processes and structures (like you are supposed to) and have all that figured out.

Now you are going to ask a developer to understand how DNS works. In the real world, most devs know their market verticals, language(s) and (quite complex) IDE toolset, and everything else is not important. I’ve had the pleasure of talking devs through running NSLOOKUP (something we IT pros often consider simple) and I basically ran a mini-class.

Assuming that a dev knows how DNS works and should be architected in The Cloud is a path to failure.

Who Does DNS In The Cloud?

I have lost track of how many cloud journeys I have been a part of, either from the start or where I joined a struggling project. A common wish for many of those customers is that they won’t run any virtual machines (some organisations even “ban” VMs) – I usually laugh and promise them some VMs later. Their DNS is usually based on Windows Server/Active Directory and, with no VMs in their future, they assume that they don’t need any DNS system.

If there is no DNS architecture, then how will a system, such as Private Endpoint, work?

A Working Architecture

I’m going to jump straight to a working architecture. I’ll start with a high-level design and then talk about some of the low-level design options.

This design works. It might not be exactly what you require but simple changes can be made for specific scenarios.

High-Level Design

Private DNS Zones are created for each resource type and service type in that resource that a Private Endpoint is deployed for. Those zones are deployed centrally and are associated with a virtual network/subnet that will be dedicated to a DNS service.

The DNS service of your choice will be deployed to the DNS virtual network/subnet. Forwarders will be configured on that DNS service to point to the “magic Azure virtual IP address” 168.63.129.16. That is an address that is dedicated to Azure services – if you send DNS requests to it, then they will be handled by:

  1. Azure Private DNS zones, looking for a matching zone/record
  2. Azure DNS, which can resolve Azure Public DNS Zones or resolve Internet requests – ah you don’t need proxy DNS servers in a DMZ now because Azure becomes that proxy DNS server!
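That two-step resolution order can be sketched as a toy resolver in Python. The zone, record, and IP values here are made-up examples for illustration; the real resolution happens inside Azure, not in your code:

```python
# A toy model of the forwarding chain: a request to 168.63.129.16 is answered
# from a linked Azure Private DNS zone if a record matches, otherwise it falls
# through to public Azure DNS. All names and addresses below are hypothetical.
PRIVATE_ZONES = {
    "privatelink.blob.core.windows.net": {
        "mystorageacct": "10.1.2.4",  # example Private Endpoint NIC address
    },
}

def resolve(fqdn: str, public_resolver) -> str:
    host, _, domain = fqdn.partition(".")
    zone = PRIVATE_ZONES.get(domain)
    if zone and host in zone:
        return zone[host]            # step 1: answered from the private zone
    return public_resolver(fqdn)     # step 2: falls through to Azure DNS

ip = resolve("mystorageacct.privatelink.blob.core.windows.net",
             public_resolver=lambda f: "203.0.113.10")  # stand-in public answer
print(ip)  # 10.1.2.4 - the private record wins when a matching zone exists
```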

Depending on the detailed design, your DNS servers can also resolve on-premises records to enable Azure-to-on-premises connections – important for migration windows while services exist in two locations, connections to partners via private connections, and when some services will stay on-premises.

All other virtual networks in your deployment (my design assumes you have a hub & spoke for a mid/large scale deployment) will have custom DNS servers configured to point at the DNS servers in the DNS Workload.

One interesting option here is Azure Firewall in the hub. If you want to enable FQDNs in Network Rules then you will:

  1. Enable DNS Proxy mode in the Azure Firewall.
  2. Configure the DNS server IP addresses in the Azure Firewall.
  3. Use the private IP address of the Azure Firewall (a HA resource type) as your DNS server in the virtual networks.

Low-Level Design

There are different options for your DNS servers:

  1. Azure Private DNS Resolver
  2. Active Directory Domain Services (ADDS) Domain Controllers
  3. Simple DNS Servers

In an ideal world, you would choose Azure Private DNS Resolver. This is a pure PaaS resource that can be managed as code – remember “VMs are banned”. You can forward to Azure Private DNS Zones and forward to on-premises/remote DNS servers. Unfortunately, Azure Private DNS Resolver is a relatively expensive resource and the design and requirements are complex. I haven’t really used Azure Private DNS Resolver in the real world so I cannot comment on compatibility with complex on-premises DNS architectures, but I can imagine there being issues with organisations such as universities where every DNS technology known to mankind since the early 1990’s is probably employed.

Most of the customers that I have worked with have opted to use Domain Controllers (DCs) in Azure as their DNS servers. The DCs store all the on-premises AD-integrated zones and can resolve records independently of on-premises DNS servers. The interface is familiar to Windows admins and is easily configured and managed. This increases usability and compatibility. If you choose a modest B-series SKU, then the cost will be quite a bit lower than Azure Private DNS Resolver. You’ll also have an ADDS presence in Azure, enabling legacy workloads to use their required authentication/authorisation methods.

The third option is to just use either a simple Windows/Linux VM as the DNS server. This is a good choice where ADDS is not required or where Linux DNS is required.

The Private Endpoint

I mentioned that a Private Endpoint/NIC combination would be deployed for each resource/service type that requires private connectivity. For example, a Storage Account can have blob, table, queue, web, file, dfs, afs, and disks services. We need to be able to redirect the client to the specific service – that means creating a DNS record in the correct Azure Private DNS Zone, such as privatelink.blob.core.windows.net. Some workloads, such as Cosmos DB, can require multiple DNS records – how do you know what to create?
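As an illustration of the service-to-zone mapping for a Storage Account, here is a small lookup table in Python. The zone names are the well-known privatelink zones for the Azure public cloud (other clouds use different suffixes), and this sketch only covers the common sub-services:

```python
# Azure Private DNS zone required per Storage Account sub-service (public
# cloud suffixes). Other resource types each have their own privatelink zones.
STORAGE_PRIVATELINK_ZONES = {
    "blob":  "privatelink.blob.core.windows.net",
    "file":  "privatelink.file.core.windows.net",
    "queue": "privatelink.queue.core.windows.net",
    "table": "privatelink.table.core.windows.net",
    "web":   "privatelink.web.core.windows.net",
    "dfs":   "privatelink.dfs.core.windows.net",
}

def zone_for(service: str) -> str:
    """Return the Private DNS zone to deploy for a given storage sub-service."""
    try:
        return STORAGE_PRIVATELINK_ZONES[service]
    except KeyError:
        raise ValueError(f"no known privatelink zone for {service!r}")

print(zone_for("blob"))  # privatelink.blob.core.windows.net
```

The point: each sub-service gets its own Private Endpoint and its own record in its own zone, which is why the zones must exist before the endpoints do.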

Luckily, there is a feature in Private Endpoint that handles auto-registration for you:

  • All of the required DNS records are created in the correct DNS zones – you must have the Azure Private DNS Zones deployed beforehand.
  • If your resource changes IP address, the DNS records will be updated automatically.

Sadly, I could not find any documentation for this feature while writing this article. However, it’s an easy feature to configure. Open your new Private Endpoint and browse to DNS Configuration. There you can see the required DNS records for this Private Endpoint.

Click Add Configuration and supply the requested information. From now on, that Private Endpoint will handle record registration/updates for you. Nice!

With a central handler for DNS name resolution, on-premises clients can connect to your Private Endpoints – subject to network security rules. On-premises DNS servers should be configured with conditional forwarders (one for each Private Link Azure Private DNS Zone) to point at your Azure DNS servers – they can point at an Azure Firewall if the previously mentioned DNS options are used.

Some Complexities

Like everything, this design is not perfect. Centralised anything comes with authorisation/governance issues. Anyone deploying a Private Endpoint will require rights to the Azure Private DNS Zones/records. In the wrong hands, that could become a ticketing nightmare where simple tasks take six weeks – far from the agility that we dream of in The Cloud.

Conclusion

The above design is one that I have been using for years. It has evolved a little as new features/resources have been added to Azure, but the core design has remained the same. It works and it is scalable. Importantly, once it is built, there is little for the devs to know about – just enable DNS Configuration in the Private Endpoint.

Tweaks can be made. I’ve discussed some DNS server options – some choose to dispense with DNS Servers altogether and use Azure Firewall as the DNS server, which forwards to the default Azure DNS services. On-premises DNS servers can forward to Azure Firewall or to the DNS servers. But the core design remains the same.

The Digital Intern – Early Experience with Microsoft Copilot

I will share my early experiences with Microsoft Copilot, the positives and negatives, clear up some false expectations, and explain why I think of Generative AI as a digital intern.

What is Generative AI?

The name gives it away. Generative AI generates or creates something from other known things. Examples are:

  • DALL-E: Creating images, such as Bing Create
  • ChatGPT: A text-based interface for finding things and generating text, such as the Copilot brand from Microsoft.

Pre-Microsoft

There are lots of brands out there but the one that’s grabbing most of the headlines is OpenAI because of ChatGPT, which is only one of their products. Like millions of others, I’ve played with ChatGPT. I’ve used it to create Terraform code. It was “OK” but I found:

  • Some of the code was out of date.
  • The structure wasn’t great.

I had to clean up that code to make it usable. But ChatGPT saved me time. I didn’t have to go googling. I was able to create a baseline and use my knowledge and ability to troubleshoot/edit to make the code usable.

I also “ChatGPTd” myself – don’t do it too often or you’ll go blind! Most of what ChatGPT wrote about me was correct. But there were some factual errors. Apparently, I’ve written two books on Azure. Factcheck: I have not published any books on Azure.

Some of the facts were also out of date. I have been “an Azure MVP for 2 years”. That was probably pulled from some online source. ChatGPT didn’t understand the fact (it’s just a calculated set of numbers) and therefore didn’t have the logic to combine “2 years” with the publication date to recalculate – or maybe put a date in brackets beside the fact.

Copilot

Microsoft has just launched Microsoft 365 Copilot and there is a lot of hoopla and hype which is helping Microsoft shares, even with a bit of a slump in the stock market in general.

I’ve been playing with it and trying things out. First up was PowerPoint. Yes, I can quickly create a presentation. I can add slides. I can change images. But the logic is limited. For example, I cannot change the theme after creating the slides.

The usual fact-checking issues are there too. I used Copilot to create a presentation for my wife on company X in Ireland. The name of company X is also used by companies in the UK and the USA. Even with precise instructions, Copilot tried to inject facts from the UK/USA companies.

However, Copilot did create a skeleton presentation and that saved some time. I played around with it in Word, and it’ll generate a doc nicely. For example, it will write a sales proposal in the style of Yoda. Copilot in Teams is handy – ask it to summarize a chat that you’ve just been added to. Outlook too does a nice job at drafting an email.

Drafting is a good choice of words, because the text is often just mumbo jumbo that has nothing to do with you or your organisation. It’s filler. In the end, it’s up to you to put in the real information that you want to push.

Bing Enterprise Chat is an option too. You can go into Bing Chat and select the M365 option. You can interrogate facts from “the graph” and M365. You can ask for your agenda for the day.

Don’t ask Copilot to tell you how many vacation days are in your calendar. It will search your chat/email history for discussions of vacation time. It does not look at items in your calendar. It will not do maths – more on this next.

Prompt Engineering

Go into Bing Create and ask it to create an image of a countryside scene. Expand the prompt in different ways:

  • Add a run-down building
  • Change the time of day
  • Alter the viewing point
  • Add a background
  • Place some birds in the sky
  • Add a person into the scene
  • Make the foreground more interesting
  • Change the style of image

The image changes gradually as you expand or change the prompt. This is called prompt engineering. Eventually, the final image is nothing like the first image from the basic prompt. What you ask for changes things. Think of the AI as lacking in the “I” part and be as clear and precise as you can be – like how one might instruct a toddler.

Custom Data

I decided to do a mini-recreation of something that I saw the folks from Prodata do with Power BI years ago for presentations. I downloaded publicly available residential property sale information for the Irish market and supplied it to Copilot.

“Tell me how many properties were sold in Dublin in 2023”. No answer, because that information was not in the data. Each property sale, including address, county, value, and description, was in the data, but the “Y properties were sold” fact was not. One would assume that an artificial intelligence would understand the question and know to list/count the items that match the search filter, but that is not what happens.

I also found other logic issues. “What was the most expensive property sold in 2023?” resulted in a house in Dublin for €1.55 million. I then asked it to list all houses costing more than €1 million. The €1.55m house was not included. I tried other prompts and then returned to my list question – and I got a different answer!

Don’t ask Copilot to do any maths – it won’t tell you averages, differences or sums – because that information was not in the “table” of supplied data.

Data Preparation

You cannot expect to just throw your data at Copilot and for magic to happen. Copilot needs data to be prepared, especially custom (non-Office) data. It needs to be in consumable chunks. You also need to understand what people might ask for – and include that information in the data.

I’m wandering outside of my expertise now, but let’s take my property example. I wanted to analyze property values, do summations, averages, and comparisons. The act of preparing this data for Copilot needs to do these calculations in advance and include the results in the data that is shared with Copilot.
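
The preparation idea can be sketched in a few lines of Python. This is a hypothetical illustration (the rows and field names are made up, and this is not how Copilot itself works internally): pre-compute the answers and store them as facts alongside the raw rows, so a retrieval-based assistant can find them rather than having to do maths.

```python
# Hypothetical sketch of the data-preparation idea: pre-compute the answers
# (counts, averages, maximums) and include them alongside the raw rows, so a
# retrieval-based assistant can find them instead of calculating them.

sales = [
    {"county": "Dublin", "year": 2023, "price": 550_000},
    {"county": "Dublin", "year": 2023, "price": 1_550_000},
    {"county": "Cork", "year": 2023, "price": 320_000},
]


def summarise(rows, county, year):
    """Produce pre-computed facts for one county/year slice of the data."""
    matching = [r["price"] for r in rows
                if r["county"] == county and r["year"] == year]
    return {
        "fact": f"{len(matching)} properties were sold in {county} in {year}",
        "average_price": sum(matching) / len(matching),
        "max_price": max(matching),
    }


summary = summarise(sales, "Dublin", 2023)
print(summary["fact"])
# 2 properties were sold in Dublin in 2023
```

The “2 properties were sold” sentence now exists as text in the prepared data, which is exactly the kind of fact the assistant failed to derive on its own in my experiment.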

Thoughts

I am not writing off ChatGPT/Copilot. There are problems but it is still very early days and things will be improved.

Right now, we need to understand what Copilot can do, and what it is good at/not good at, and match it up with what will assist the organization.

The most important thing is how we consider Copilot. The name choice by Microsoft was deliberate. They did not call it “Pilot”.

Generative AI is an assistant. It will handle repetitive tasks based on existing data. It has no intelligence to infer new data. It cannot connect two facts that we know are logically connected but are not written down as connected. And Generative AI makes mistakes.

Microsoft called it Copilot because the pilot is responsible for the plane. The user is the pilot. The intention is that Generative AI handles the dull stuff but we add the creativity (prompt engineering/editing) and fact-checking (review/editing).

If you think about it, Copilot is acting like a Digital Intern. How are interns used? You ask them to do the simple things: get lunch, research X and write a short report, write a draft document, and so on. Does the intern produce the final product for a customer/boss? No. Is the intern responsible for what comes out of your team/department? No.

The intern is fresh out of school and knows almost nothing. They will produce exactly what you tell them – if the prompt is too general they get lost in the possibilities. You take what the intern gives you and review/edit/improve it. Their work saves you time, but your knowledge, expertise, and creativity are still required.

I might sound like a downer – I’m not. I’m just not on board the hype train. I’m saying that the train is useful to get from A to B right now, but the line doesn’t go all the way to Z yet. It is still valuable but you have to understand that value and don’t get lost in the hype and the Hollywood-ing of IT.

Default Outbound Access For VMs In Azure Will Be Retired

Microsoft has announced that the default route, an implicit public IP address, is being deprecated on 30 September 2025.

Background

Let’s define “Internet” for the purposes of this post. The Internet includes:

  • The actual Internet.
  • Azure services, such as Azure SQL or Azure’s KMS for Windows VMs, that are shared with a public endpoint (IP address).

We have had ways to access those services, including:

  • Public IP address associated with a NIC of the virtual machine
  • Load Balancer with a public IP address with the virtual machine being a backend
  • A NAT Gateway
  • An appliance, such as a firewall NVA or Azure Firewall, being defined as the next hop to Internet prefixes, such as 0.0.0.0/0

If a virtual machine is deployed without having any of the above, it still needs to reach the Internet to do things like:

  • Activate a Windows license against KMS
  • Download packages for Ubuntu
  • Use Azure services such as Key Vault, Azure SQL, or storage accounts (diagnostics settings)

For that reason, all Azure virtual machines are able to reach the Internet using an implied public IP address. This is an address that is randomly assigned to SNAT the connection out from the virtual machine to the Internet. That address:

  • Is random and can change
  • Offers no control or security

Modern Threats

There are two things that we should have been designing networks to stop for years:

  • Malware command and control
  • Data exfiltration

The modern hack is a clever and gradual process. Ransomware is not some dumb bot that gets onto your network and goes wild. Some of the recent variants are manually controlled. The malware gets onto the network and attempts to call home to a “machine” on the Internet. From there, the controllers can explore the network and plan their attack. This is the command and control. This attempt to “call home” should be blocked by network/security designs that block outbound access to the Internet by default, opening only connections that are required for workloads to function.

The controller will discover more vulnerabilities and download more software, taking further advantage of vulnerable network/security designs. Backups are targeted for attack first, data is stolen, and systems are crippled and encrypted.

The data theft, or exfiltration, is to an IP address that a modern network/security design would block.

So you can see that a network design where an implied public IP address is used is not good practice. This is a primary consideration in Microsoft’s decision to end the future use of implied public IP addresses.

What Is Happening?

On September 30th 2025, all new virtual machines will no longer be able to use an implied public IP address. Existing virtual machines will be unaffected – but I want to drill into that, because it’s not as simple as one might think.

A virtual machine is a resource in Azure. It’s not just some disks. It’s not your concept of “I have something called X that is a virtual machine”. It’s a resource that exists. At some point, that resource might be removed. At that point, the virtual machine no longer exists, even if you recreate it with the exact same disks and name.

So keep in mind:

  • Virtual networks with existing VMs: The existing VMs are unaffected, but new VMs in the VNet will be affected and won’t work.
  • Scale-out: Let’s say you have a big workload with dozens of VMs with no public IP usage. You add more VMs and they don’t work – it’s because they don’t have an implied IP address, unlike their older siblings.
  • Restore from backup: You restore a VM to create a new VM. The new VM will not have an implied public IP address.

Is This a Money Grab?

No, this is not a money grab. This is an attempt by Microsoft to correct a “wrong” (it was done to be helpful to cloud newcomers) that was done in the original design. Some of the mitigations are quite low-cost, even for small businesses. To be honest, what money could be made here is pennies compared to the much bigger money that is made elsewhere by Azure.

The goal here is to:

  • Be secure by default by controlling egress traffic to limit command & control and data exfiltration.
  • Provide more control over egress flows by selecting the appliance/IP address that is used.
  • Enable more visibility over public IP addresses, for example, what public address should I share with a partner for their firewall rules?
  • Drive better networking and security architectures by default.

What Is Your Mitigation?

There are several paths that you can choose.

  1. Assign a public IP address to a virtual machine: This is the lowest cost option but offers no egress security. It can get quite messy if multiple virtual machines require public IP addresses. Rate this as “better than nothing”.
  2. Use a NAT Gateway: This allows a single IP address (or a range from an Azure Public IP Address Prefix) to be shared across an entire subnet. Note that NAT Gateway gets messy if you span availability zones, requiring disruptive VNet and workload redesign. Again this is not a security option.
  3. Use a next hop: You can use an appliance (virtual machine or Marketplace network virtual appliance) or the Azure Firewall as a next hop to the Internet (0.0.0.0/0) or specific Internet IP prefixes. This is a security option – a firewall can block unwanted egress traffic. If you are budget-conscious, then consider Azure Firewall Basic. No matter what firewall/appliance you choose, there will be some subnet/VNet redesign and changes required to routing, which could affect VNet-integrated PaaS services such as API Management Premium.
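
As a planning aid, the check for affected VMs can be sketched in Python. This is a hypothetical model, not the Azure API – the field names are illustrative – but it captures the logic: a VM relies on the retiring implicit outbound access only if none of the explicit mechanisms above are present.

```python
# Hypothetical sketch: given a simplified description of a VM's networking,
# flag whether it relies on the (retiring) implicit public IP for outbound
# access. The field names are illustrative, not Azure API properties.


def relies_on_implicit_outbound(vm: dict) -> bool:
    explicit = (
        vm.get("nic_public_ip")              # public IP on the NIC
        or vm.get("lb_outbound_rules")       # load balancer outbound rules
        or vm.get("subnet_nat_gateway")      # NAT Gateway on the subnet
        or vm.get("udr_next_hop_appliance")  # 0.0.0.0/0 via firewall/NVA
    )
    return not explicit


legacy_vm = {"nic_public_ip": None, "subnet_nat_gateway": None}
print(relies_on_implicit_outbound(legacy_vm))
# True
```

Running that kind of audit across your estate now, rather than during a stressed-out restore, is the whole point of the long deadline.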

September 2025 is a long time away. But you have options to consider and potentially some network redesign work to do. Don’t sit around – start working.

In Summary

The implied route to the Internet for Azure VMs will stop being available to new VMs on September 30th, 2025. This is not a money grab – you can choose low-cost options to mitigate the effects if you wish. The hope is that you opt to choose better security, either from Microsoft or a partner. The deadline is a long time away. Do not assume that you are not affected – one day you will expand services or restore a VM from backup and be affected. So get started on your research & planning.

What is a Managed Private Endpoint?

Something new appeared in recent times: the “Managed Private Endpoint”. What the heck is it? Why would I use it? How is it different from a “Private Endpoint”?

Some Background

As you are probably aware, most PaaS services in Azure have a public endpoint by default. So if I use a Storage Account or Azure SQL, they have a public interface. If I have some security or compliance concerns, I can either:

  • Switch to a different resource type to solve the problem
  • Use a Private Endpoint

Private Endpoint is a way to interface with a PaaS resource from a subnet in a virtual network. The resource uses the Private Link service to receive connections and respond – this stateful service does not allow outbound connections, providing a form of protection against some data leakage vectors.

Say I want to make a Storage Account only accessible on a VNet. I can set up a Private Endpoint for the particular API that I care about, such as Blob. A Private Endpoint resource is created and a NIC is created. The NIC connects to my designated subnet and uses an IP configuration for that subnet. Name resolution (DNS) is updated and now connections from my VNet(s) will go to the private IP address instead of the public endpoint. To enforce this, I can close down the public endpoint.

The normal process is that this is done from the “target resource”. In the above case, I created the Private Endpoint from the storage account.

Managed Private Endpoint

This is a term I discovered a couple of months ago and, to be honest, it threw me. I had no idea what it was.

So far, Managed Private Endpoints are features of data services such as Azure Data Factory and Azure Synapse Analytics.

The basic concept of a Managed Private Endpoint is the same as that of a Private Endpoint. It is used to connect to a PaaS resource, also referred to as the target resource (ah, there’s a clue!), over a private connection.

Microsoft: Azure Data Factory Integration Runtime connecting privately to other PaaS targets

What is different is that you create the Managed Private Endpoint from a client resource. Say, for example, I want Azure Synapse Analytics to connect privately to an Azure Cosmos DB resource. The Synapse Analytics resource doesn’t do normal networking so it needs something different. I can go to the Synapse Analytics resource and create a Managed Private Endpoint to the target Cosmos DB resource. This is a request – the operator of the Cosmos DB resource must accept the Private Endpoint on their target resource.

Once done, Synapse Analytics will use the private Azure backbone instead of the public network to connect to the Cosmos DB resource.
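
The request/approval flow described above can be sketched as a tiny state machine. This is an illustrative Python model (the state and action names are my own simplification, not the Azure API’s): the endpoint starts as a pending request from the client resource and only carries traffic once the target resource’s owner approves it.

```python
# Hypothetical sketch of the Managed Private Endpoint approval flow: the
# client resource (e.g. Synapse) creates the endpoint in a Pending state,
# and the owner of the target resource (e.g. Cosmos DB) must approve it
# before traffic flows privately. States and actions are illustrative.

TRANSITIONS = {
    ("Pending", "approve"): "Approved",
    ("Pending", "reject"): "Rejected",
    ("Approved", "revoke"): "Rejected",
}


def apply(state: str, action: str) -> str:
    """Return the next state; unknown actions leave the state unchanged."""
    return TRANSITIONS.get((state, action), state)


state = "Pending"                 # created from the client resource
state = apply(state, "approve")   # target resource owner approves
print(state)
# Approved
```

The important design point is the two-party handshake: the client-side team cannot grant itself private access to a target resource it doesn’t own.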

Managed Virtual Network

Is your head wrecked yet? A Managed Private Endpoint uses a Managed Virtual Network. As I said above, a resource like Synapse Analytics doesn’t do normal networking. But a Managed Private Endpoint is going to require a Virtual Network and a subnet to connect the Managed Private Endpoint and NIC.

These are PaaS resources so the goal is to push IaaS things like networking into the platform to be managed by Microsoft. That’s what happens here. When you want to use a Managed Private Endpoint, a Managed Virtual Network is created for you in the same region as the client resource (Synapse Analytics in my example). That means that data engineers don’t need to worry about VNets, subnets, route tables, peering, and all the stuff when creating integrations.

Azure Infrastructure Announcements – September 2023

September is a month of storms. There appears to have been lots of activity in the Azure cloud last month too. Everyone working on Azure should pay attention to the PAY ATTENTION! section.

PAY ATTENTION!

Default outbound access for VMs in Azure will be retired— transition to a new method of internet access

On 30 September 2025, default outbound access connectivity for virtual machines in Azure will be retired. After this date, all new VMs that require internet access will need to use explicit outbound connectivity methods such as Azure NAT Gateway, Azure Load Balancer outbound rules, or a directly attached Azure public IP address.

There will be more communications on this from Microsoft. But this is more than a “don’t worry about your existing VMs” situation. What happens when you add more VMs to an existing old network? What happens when you do a restore? What happens when you do an Azure Site Recovery failover? Those are all new VMs in old networks and they are affected. Everyone should do some work to see if they are affected and prepare remediations in advance – not on the day when they are stressed out by a restore or a Black Friday expansion.

App Service Environment version 1 and version 2 will be retired on 31 August 2024

After 31 August 2024, App Service Environment v1 and v2 will no longer be supported and these App Service Environments and the applications running on them will be deleted and any application data associated with them will be lost.

Oh yeah, you’d better start working on migrations now.

Azure Kubernetes Service

Application gateway for Containers vs Application Gateway Ingress Controller – What’s changed?

Application Gateway for Containers is a new application (layer 7) load balancing and dynamic traffic management product for workloads running in a Kubernetes cluster. At the time of writing this service is currently in public preview. In this article we will look at the differences between AGIC and Application Gateway for containers and some of the great new features available through this new offering. 

I know little about AKS but this subject seems to have excited some AKS users.

A Bucket Load Of Stuff

Too much for me to get into and I don’t know enough about this stuff:

App Services

Announcing Public Preview of Free Hosting Plan for WordPress on App Service

We announced the General Availability of WordPress on App Service one year ago, in August 2022 with 3 paid hosting plans. We learnt that sometimes you might need to try out the service before you migrate your production applications. So, we are offering you a playground for a limited period – a free hosting plan to explore and experiment with WordPress on App Service. This will help you understand the offering better before you make a long-term investment.

They really want you to try this out – note that this plan is not for production workloads.

Hybrid

Announcing the General Availability of Jumpstart HCIBox

Almost one year ago the Jumpstart team released the public preview of HCIBox, our self-contained sandbox for exploring Azure Stack HCI capabilities without the need for physical hardware. Feedback from the community has been fantastic, with dozens of feature requests and issues submitted and resolved through our open-source community.

Today, the Jumpstart team is excited to announce the general availability of HCIBox!

It’s one thing to test out the software functionality of Azure Stack HCI. But the reality is that this is a hardware-centric solution and there is no simulating the performance, stability, or operations of something this complex.

Generally Available: Windows Server 2012 and 2012 R2 Extended Security Updates enabled by Azure Arc

Windows Server 2012 and 2012 R2 Extended Security Updates (ESUs) enabled by Azure Arc is now Generally Available. Windows Server 2012 and 2012 R2 are going End of Support on October 10, 2023. With ESUs, customers who are running Windows Server 2012 on-premises or in other clouds can get three more years of critical security updates from Microsoft to protect their End of Life infrastructure.

This is not free. This is tied into the news about Azure Update Manager (below).

Miscellaneous

Detailed CSP to EA Migration guidance and crucial considerations

In this blog, I’ve shared insights drawn from real-world migration experiences. This article can help you meticulously plan your own CSP to EA migration, ensuring a smoother transition while incorporating critical considerations into your migration strategy.

One really wishes that CSP, EA, etc were just differences in billing and not Azure APIs. Changing of billing should be like changing a phone plan.

Top 10 Considerations for running your workload successfully on Azure this Holiday Season

Black Friday, Small Business Saturday and Cyber Monday will test your app’s limits, and so it’s time for your Infrastructure and Application teams to ensure that your platforms delivers when it is needed the most. Be it shopping applications on the web and mobile or payment gateways or banking systems supporting payments or inventory systems or billing systems – anything and everything associated with the shopping season should be prepared to face the load for this holiday season.

The “holiday season” starts earlier every year. Tesco Ireland started in August. Amazon has a Prime Day next Tuesday (October 10). These events test systems harder than ever and monolithic on-prem designs will not handle it. It’s time to get ready – if it’s not already too late!

Ungated Public Preview: Azure API Center

We’re thrilled to share that Azure API Center is now open for everyone to try during our ungated public preview! Azure API Center is a new Azure service that is part of the Azure API Management platform. It is the central hub where you can effortlessly keep track of all your APIs company-wide, making them readily discoverable, reusable, and manageable.

Managing a catalog of APIs could be challenging. Tooling is welcome.

Generally available: Secure critical infrastructure from accidental deletions at scale with Policy

We are thrilled to announce the general availability of DenyAction, a new effect in Azure Policy! With the introduction of Deny Action, policy enforcement now expands into blocking request based on actions to the resource. These deny action policy assignments can safeguard critical infrastructure by blocking unwarranted delete calls.  

Can you believe that Azure was designed deliberately to not have a deny permission? Adding it afterwards is not easy. The idea here is that delete locks on resources/resource groups are too easy to remove – and are frequently removed. Something like a policy, enforced in the API (between you and the resources), is always applied, is not easy to remove, and can be deployed at scale.

Virtual Machines

Generally available: Azure Premium SSD v2 Disk Storage is now available in more regions

Azure Premium SSD v2 Disk Storage is now available in Australia East, France Central, Norway East and UAE North regions. This next-generation storage solution offers advanced general-purpose block storage with the best price performance, delivering sub-millisecond disk latencies for demanding IO-intensive workloads at a low cost.

Expanded region availability makes this something more interesting. But, Azure Backup support is in very limited preview since the Spring.

Announcing the general availability of new Azure burstable virtual machines

we are announcing the general availability of the latest generations of Azure Burstable virtual machine (VM) series – the new Bsv2, Basv2, and Bpsv2 VMs based on the Intel® Xeon® Platinum 8370C, AMD EPYC™ 7763v, and Ampere® Altra® Arm-based processors respectively. 

Faster and cheaper than the previous editions of B-Series VMs and they include ARM support too. The new virtual machines support all remote disk types such as Standard SSD, Standard HDD, Premium SSD and Ultra Disk storage.

Generally Available: Azure Update Manager

We are pleased to announce that Azure Update Manager, previously known as Update Management Center, is now generally available.

The controversial news is that Arc-managed machines will cost $5/month. I’m still not sold on this solution – it still feels less capable than legacy solutions like WSUS.

Announcing Public Preview of NVMe-enabled Ebsv5 VMs offering 400K IOPS and 10GBps throughput

Today, we are announcing a Public Preview of accelerated remote storage performance using Azure Premium SSD v2 or Ultra disk and selected sizes within the existing NVMe-enabled Ebsv5 family. The higher storage performance is offered on the E96bsv5 and E112ibsv5 VM sizes and delivers up to 400K IOPS (I/O operations per second) and 10GBps of remote disk storage throughput.

Even the largest SQL VM that I have worked with comes nowhere near these specs. The customer(s) that have justified this investment by Microsoft must be huge.

Azure savings plan for compute: How the benefit is applied

Organizations are benefiting from Azure savings plan for compute to save up to 65% on select compute services – and you could too. By committing to spending a fixed hourly amount for either one year or three years, you can save on plans tailored to your budget needs. But you may wonder how Azure applies this benefit.

It’s simple really. The system looks at your VMs, calculates the theoretical savings, applies your discount first to the machines where you will save the most money, and then repeats until your commitment is used.
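
A greedy sketch of that application order, in hypothetical Python (the rates and VM names are made up, and this is a simplification of the real billing engine):

```python
# Hypothetical sketch of the described benefit logic: apply the hourly
# commitment to the resources with the highest savings percentage first,
# repeating until the commitment is used up. Numbers are illustrative.


def apply_savings_plan(usage, hourly_commitment):
    """usage: list of (name, pay_as_you_go_rate, discounted_rate) per hour."""
    # Sort by savings percentage, biggest saver first.
    ordered = sorted(usage, key=lambda u: (u[1] - u[2]) / u[1], reverse=True)
    remaining = hourly_commitment
    covered = []
    for name, payg, discounted in ordered:
        if remaining >= discounted:
            covered.append(name)
            remaining -= discounted
    return covered


usage = [("vm-big", 2.00, 0.70), ("vm-small", 0.20, 0.12)]
print(apply_savings_plan(usage, 0.80))
# ['vm-big']
```

With an hourly commitment of 0.80, only the biggest saver is covered; anything beyond the commitment keeps billing at pay-as-you-go rates.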

General Availability: Share VM images publicly with community gallery – Azure Compute Gallery feature

With community gallery, a new feature of Azure Compute Gallery, you can now easily share your VM images with the wider Azure community. By setting up a ‘community gallery’, you can group your images and make them available to other Azure customers. As a result, any Azure customer can utilize images from the community gallery to create resources such as virtual machines (VMs) and VM scale sets.

This is a cool idea.

Trusted Launch for Azure VMware Solution virtual machines

Azure VMware Solution proudly introduces Public Preview of Trusted Launch for Virtual Machines. This advanced feature comprises Secure Boot, Virtual Trusted Platform Module (vTPM), and Virtualization-based Security (VBS), collectively forming a formidable defense against modern cyber threats.

A feature that was introduced in Windows Server 2016 Hyper-V.

Infrastructure-As-Code

Introduction to Azure DevOps Workload identity federation (OIDC) with Terraform

Workload identity federation is an OpenID Connect implementation for Azure DevOps that allow you to use short-lived credential free authentication to Azure without the need to provision self-hosted agents with managed identity. You configure a trust between your Azure DevOps organisation and an Azure service principal. Azure DevOps then provides a token that can be used to authenticate to the Azure API.

This looks like a more secure way to authenticate your pipelines. No secrets are stored, and a trust between your DevOps organisation and Azure enables short-lived authentication with the desired access rights/scopes.

Quickstart: Automate an existing load test with CI/CD

In this article, you learn how to automate an existing load test by creating a CI/CD pipeline in Azure Pipelines. Select your test in Azure Load Testing, and directly configure a pipeline in Azure DevOps that triggers your load test with every source code commit. Automate load tests with CI/CD to continuously validate your application performance and stability under load.

This is not something that I have played with but I suspect that you don’t want to do this against production systems!

General Availability: GitHub Advanced Security for Azure DevOps

Starting September 20th, 2023, the core scanning capabilities of GitHub Advanced Security for Azure DevOps can now be self-enabled within Azure DevOps and connect to Microsoft Defender for Cloud. Customers can automate security checks in the developer workflow using:

  • Code Scanning: locates vulnerabilities in source code and provides remediation guidance.
  • Secret Scanning: identifies high-confidence secrets and blocks developers from pushing secrets into code repositories.
  • Dependency Scanning: discovers vulnerabilities with open-source dependencies and automates update alerts for developers.

This seems like a good direction to go but I’m told it’s quite pricey.

Networking

General availability: Sensitive Data Protection for Application Gateway Web Application Firewall

WAF running on Application Gateway now supports sensitive data protection through log scrubbing. When a request matches the criteria of a rule, and triggers a WAF action, that event is captured within the WAF logs. WAF logs are stored as plain text for debuggability, and any matching patterns with sensitive customer data like IP address, passwords, and other personally identifiable information could potentially end up in logs as plain text. To help safeguard this sensitive data, you can now create log scrubbing rules that replace the sensitive data with “******”.

Sounds good to me!

General availability: Gateway Load Balancer IPv6 Support

Azure Gateway Load Balancer now supports IPv6 traffic, enabling you to distribute IPv6 traffic through Gateway Load Balancer before it reaches your dual-stack applications. 

With this support, you can now add IPv6 frontend IP addresses and backend pools to Gateway Load Balancer. This allows you to inspect, protect, or mirror both IPv4 and IPv6 traffic flows using third-party or custom network virtual appliances (NVAs). 

Useful for security architectures where NVAs are being used.

Azure Backup

Preview: Cross Region Restore (CRR) for Recovery Services Agent (MARS) using Azure Backup

We are announcing the support of Cross Region Restore for Recovery Services Agent (MARS) using Azure Backup.

This makes sense. Let’s say I back up my on-prem data, located in Virginia, to Azure East US in Boydton, Virginia. Then a disaster in Virginia wipes out both my office and Azure East US. Now I can restore to a new location from the paired-region replica.

Preview: Save Azure Backup Recovery Services Agent (MARS) passphrase to Azure Key Vault

Now, you can save your Azure Recovery Services Agent encryption passphrase in Azure Key Vault directly from the console, making the Recovery Services Agent installation seamless and secure.

This beats the old default option of saving it as a text file on the machine that you were backing up.

General availability: Selective Disk Backup and Restore in Enhanced Policy for Azure VM Backup

We are adding the “Selective Disk Backup and Restore” capability in Enhanced Policy of Azure VM Backup. 

Be careful out there!

Storage

General Availability: Malware Scanning in Defender for Storage

Malware Scanning in Defender for Storage will be generally available on September 1, 2023.

Please make sure that you read up on how much this will cost you. The DfC plans changed recently, and the pricing model for Storage plans changed to include this feature.

Azure Monitor

Public preview: Alerts timeline view

Azure Monitor alerts is previewing a new timeline view that simplifies the consumption experience of fired alerts. The new view has the following advantages:

  • Shows fired alerts on a timeline
  • Helps identify co-occurrence of alerts
  • Displays alerts in the context of the resources they fired on
  • Focuses on showing counts of alerts to better understand impact
  • Supports viewing alerts by severity
  • Provides a more intuitive discovery and investigation path

This might be useful if you are getting a lot of alerts.

Azure Virtual Desktop

Announcing general availability of Azure Virtual Desktop Custom Image Templates

Custom image templates allow admins to build a custom “golden image” using the Azure Virtual Desktop management user interface. Leverage a variety of built-in customizations or add your own customization scripts to install applications or configurations.

Why are they not using Azure Image Builder like I do?