At this point I have close to a decade of experience working with Azure, AWS, and GCP, and I can confidently say Azure is objectively the worst of the three when it comes to security.
Performance, "I don't like the portal", service and capacity availability, and such complaints are somewhat subjective or fixable but I deeply believe Microsoft is the most insecure of the cloud giants on a measurable level.
Anyone that is serious about security should just avoid Microsoft, this has honestly been the case since the early '00s at the least.
I think it’s not just the security of the platform itself either that’s measurably worse - it’s also way easier to end up with insane security configurations with the hellscape that is Entra. It all just feels like it’s held together with duct tape.
The deep integration with AD (now Entra) was the strongest selling point for Azure, but it’s also by far the biggest issue with the platform IMO.
There’s also just no consistency in the platform - the CLI, for instance, has totally different flags and names depending on which subcommand you’re using. It’s like this everywhere in Azure.
> There’s also just no consistency in the platform - the CLI, for instance, has totally different flags and names depending on which subcommand you’re using. It’s like this everywhere in Azure.
For all of AWS's faults, one of the reasons I really like them is how consistent everything is. There were so many instances where I could correctly guess the right command for the AWS CLI based on how other services worked; I could never do that with GCP or Azure.
I would love to read an article about how AWS ensures this kind of consistency. Given how Azure and GCP both messed this up, it's clearly not a trivial problem (even though it may seem like one).
They have a governance panel for all AWS services that approves design docs and API contracts (at least this is what I was told by an old manager who worked on AWS back in the day).
It isn't quite as formal as that, but there is a group of engineers who review new APIs for following AWS-wide standards.
there's also a significant amount of automation in place these days to steer you in the right direction, e.g.:
* focusing on resources and operations on resources
* using consistent and expected naming schemes, pluralization, etc.
it also helps that the SDKs and CLIs are very raw wrappers around the underlying APIs, such that if you know what something looks like in the SDK, it will look similar in the CLI.
Identity management is a mess on Azure! I still cannot understand the difference between app registrations and enterprise applications, and how they tie into service principals.
They also have a lot of different resources involved, such as the Graph API and Entra ID.
Managed identities are simpler: since they are Azure constructs, they work more or less like an IAM role. But then you try to use them with Entra ID APIs and things fall apart.
My favourite pet peeve is that it uses a bunch of indistinguishable random GUIDs, all of which have two names for no discernible reason whatsoever.
So the doco and the UI end up littered with things like:
PrincipalId (ClientId)
There’s at least six of those and I honestly can’t remember which pairs with which or what the difference is… which I’m sure is security-critical… somehow.
An App Registration is the overall object. Think of it like a class in OOP.
An enterprise app is an instance of an app registration. Think of it like an object in OOP.
For single-tenant apps this might seem confusing, because you have both for a single app.
But if you were to have multi-tenant apps, each tenant would have their own Enterprise App instance, all referencing the same App Registration.
appId is for App Registrations.
objectId is for Enterprise Application Registrations.
clientId will be the same as appId. It is used in the context of authentication, where it is the id of the object acting as a client.
The problem is that those “id” names have nothing to do with what they’re pointing at.
“EnterpriseAppId” and “AppRegistrationId” would make sense.
ObjectId is meaningless nonsense. Everything is an object! Everything has an Id! This tells you nothing specific.
As someone who is greatly motivated to move off Azure (to on-prem, not to another cloud), do you know of any good collection of Azure security issues I could use as 'ammunition'? It would be greatly appreciated!
UPD: note to self - this seems like a good resource https://www.cloudvulndb.org/results
I have some notes somewhere but unfortunately they don't have citations, these are just some of the vulns they've had in the last couple years:
• Storm-0558 Breach (2023): Chinese hackers exploited a leaked signing key from a crash dump to access U.S. government emails, affecting 60,000+ State Department communications
• Azure OpenAI Service Exploitation (2024): Hackers bypassed AI guardrails using stolen credentials to generate illicit content, leading to Microsoft lawsuits against developers in Iran, UK, and Vietnam
• CVE-2025-21415 (CVSS 9.9): Spoofing vulnerability in Azure AI Face Service allowed authentication bypass and privilege escalation
• CVE-2023-36052: Azure CLI logging flaw exposed plaintext credentials in CI/CD pipelines, risking sensitive data leakage
• Azurescape (2021): Container escape vulnerability enabled cross-tenant access in Azure Container Instances, discovered by Palo Alto Networks
• ChaosDB (2021): Wiz researchers exploited CosmosDB’s Jupyter Notebook integration to access thousands of customer databases, including those of Fortune 500 companies
• Executive Account Takeover Campaign (2024): Phishing campaign compromised 500+ executive accounts via Azure collaboration tools with MFA manipulation
If your company or workplace is considering migrating from cloud to on-prem or from one cloud to another, I do this professionally btw, feel free to reach out at this temporary email and we can chat: pale.pearl2178 at fastmail.com (to prevent my real email being scraped from HN).
Great, thanks!
For me it's just a distant dream now, but I bet business will be booming for you in the coming years, especially if you're located in Europe ;)
This list of vulns that nobody was ever bothered by, except for one (Storm-0558), doesn't prove your ridiculously sensational comment above.
Security issues/CVEs should never be used as a motivation to get off of a particular platform; otherwise we'd never use Linux, macOS, or Windows (I hope you're a fan of OpenBSD... sometimes).
If these issues remain unfixed after being disclosed, or there's a pattern of fixes taking much longer than you feel they should have, that's valuable ammunition, as it shows the organization isn't responsive to security issues.
I agree you shouldn't write off any platform/software/etc based solely on the number of vulnerabilities. I also agree that how responsive they are to fixing things is a factor to consider. But I think that's only _a_ factor.
Take something like a container escape vulnerability.
We could have Vendor A, who is just running containerd on a bunch of hosts on a single network segment and throwing everyone's containers at it, so a container escape vulnerability essentially gets you access to everything any of their customers are running.
Whereas Vendor B segments running containers into per-tenant VMs, so a container escape vulnerability means you can only access your own data. Not great, because one compromised container gives the attacker a path into the rest of your workloads, but at least you know they're maintaining a pretty solid wall between tenants.
Then there's Vendor C, who actually runs containers using some micro-VM framework, so each container runs fully isolated by a hypervisor with a fully separate emulated network stack, etc., and the escape really gets the attacker no more access than they had inside the container.
A pattern of issues like Vendor A is, well, a pattern. A series of issues that show their systems are fundamentally not designed for proper isolation between tenants and are lacking defense-in-depth measures to mitigate the fallout of the inevitable security issues is a very good reason to write off Vendor A regardless of how quickly they respond to the issues.
I'm not going to go back and review all the Azure issues, but my recollection from the few writeups I've read definitely paints a picture of a lot more "Vendor A" type issues than I'd be comfortable with.
All of this presupposes that whatever you implement yourself will be more secure and/or that you have the budget to even begin to approach the same level of security.
I’ve been there, done that, and was amazed at how the security aspects alone rapidly escalated to many millions of dollars, with an ongoing cost also in the million-or-two range!
Think of this like a CEO: they’re less worried about Chinese hackers and more worried about insider attacks, which are much more common and do way more financial damage.
The cloud automatically provides separation of roles because an entirely different vendor is in charge of the lower layers, such as networking and storage.
Do you have any idea how hard it is to prevent a smart sysadmin from simply copying all data to a USB drive and walking out of the building with it?
That’s much harder when everything is on a managed hosting platform and no single person can access all accounts / subscriptions.
> All of this presupposes that whatever you implement yourself will be more secure
No, this thread is about Azure in particular having a bad security posture, not the cloud in general.
True, but on-prem is unlikely to be better than even Azure, especially if you use “simple” services such as VMs and the like.
They’ve improved a lot, but their Achilles heel used to be that the only way they could achieve more challenging compliance requirements was to have multiple segmented clouds.
With Office 365, for example, they had at least 4 government clouds, some of which used shared infrastructure with Azure commercial, but had different data residency or employee requirements. They have thousands of employees monitored by all of the states as a condition of working on those clouds, for example.
Technical controls are similar, but the weak points are things that can cross cloud boundaries. One of the Chinese breaches of US government systems was caused by a PKI vulnerability that allowed the attacker to pivot from a dev environment to a federal cloud instance.
Azure requires that you use SHA-1 RSA private keys for initially connecting to VMs.
Not strictly security, but there are several long-standing issues with Azure DevOps build pipelines and Artifacts feeds. Using a private artifact feed in your pipeline inexplicably adds minutes to the time it takes to restore packages. And publishing C# NuGet packages together with the source/symbol files is a poorly supported and poorly documented mess (it doesn't help that NuGet support in the dotnet CLI is missing important and long-requested features that are only available by using the full-fat NuGet client or MSBuild directly).
We just migrated off Azure after one too many deprecations or downtimes caused by some random new feature or change in how permissions work. We gave up.
Another reason to be worried by Microsoft’s Azure security guidelines which state “Identity is the new perimeter”.
Well, the perimeter is not a gate but a cattle guard, and I am not surprised to see some wolves eating a secret and a cow swaggering into the road.
Azure service APIs have always conflated the principles of “reachability from the public internet” and “anonymous access” into a single concept called “Public Access” which, for Azure KV, has 6 different public/private configuration combinations!
This vulnerability report did not include the Key Vault Networking settings for “Public network access”, so more testing (but not much more) is needed to see if the proxy side door can circumvent a resource ACL or private endpoint or both.
It's not just "identity", but "authorization". Really, what they mean is "defense in depth" minus firewalls (because the "in depth" part makes those less relevant), I think. And... that is a reasonable position... provided you get the "in depth" part right, which includes not having proxies that bypass authorization.
Binary Security found the undocumented APIs for Azure API Connections. In this post we examine the inner workings of the Connections allowing us to escalate privileges and read secrets in backend resources for services ranging from Key Vaults, Storage Blobs, Defender ATP, to Enterprise Jira and SalesForce servers.
I'm no security expert, but this seems like a bad take. How are APIs any less secure than any other form of interacting with a program? Nothing here is really a problem with APIs, but rather a problem with access control.
> anyone with Reader permissions on the connection is allowed to arbitrarily call any endpoint on the connection
This is not an API issue... It feels like saying we shouldn't allow users to search a database because they might run a SQL injection to drop all the tables. Searching tables isn't the problem; not sanitizing inputs is. This is more like giving all users on your network sudo access, or just doing chmod -R 777 /.
My concern here is that a lot of people will take away that APIs shouldn't be exposed because they create security risks. But that's not true. The API exposure isn't the risk; the access control is. If you don't have proper access control, then it really isn't going to matter whether you have an API or not. But then again, we have a long history of not taking fairly basic security seriously, and with decades of computing and seeing the results, I really can't figure out why. Sure, security is expensive, but bad security is far more expensive. I guess maybe the issue is that I'm not much of a gambler.
I think you misunderstood what was meant by "API connections". In Azure, they're an entity that is created to represent connectivity to some external service, usually bundled with credentials and the OpenAPI definition of the downstream service. They let you consume an external service from other Azure services without having to worry about things like token refresh. The article goes into better detail on this than I can in a comment.
I did read the article, and I'm not sure why this isn't about access control.
> it is common to not mark input (and output) as sensitive.
There's 2 solutions to this:
1) Fail open: the default is that things are not marked as sensitive, and an active decision has to be taken to mark them sensitive
2) Fail closed: by default things are marked sensitive, and action needs to be taken to mark them as non-sensitive
Another way of seeing this is that 2 is the common paradigm of "least privilege": you give users, files, services, whatever, the minimal privileges required.
> What I would not expect is that anyone with Reader permissions on the connection is allowed to arbitrarily call any endpoint on the connection:
To me this sounds like doing `chmod -R +r /`. Or, as the author puts it:
> all Readers on that subscription can call all GET requests defined on the connection.
This is certainly an access control issue. Even if the issue is that Azure doesn't allow for more fine-grained access control, it is still access control. So that's what I'm not getting. It is about having the ability to do API calls, to do GET and POST requests; it is about tokens (accounts) having more privileges than they should. What am I missing here?
That’s a scary vulnerability. There’s no mention of the bug bounty paid out for it but I hope it was substantial.
Well at the bottom of the article, they mention that Microsoft first closed the issue as invalid, and on the second attempt they closed it as "cannot be reproduced" (after fixing it).
So from that I can infer there was no payment.
I've reported a trivial way to infer details about passwords in Windows. (Ctrl-arrow in password fields in Windows 8 jumped by character group even when hidden so if a prefilled password was 123 abc.de it would stop after 3, after space (I think), after c, after dot and finally after e.)
All I got was an email: "that is interesting", bye bye. But it was fixed in the next patch, or the one after, I think.
So I didn't care to report the two bigger problems I found with Azure Information Protection [1][2]. I thought about reporting them but decided against it.
And I will continue to tell people that I don't care to do free work for MS when they won't even give me a t-shirt, a mug or even acknowledge it.
Maybe if one is a security researcher it can be worth it, but if you just find something interesting you'll probably be better rewarded by Reddit or HN. Yes, the upvotes are worthless, but less so than a dismissive email.
[1]: one in the downloadable AIP tooling, where you can easily smuggle cleartext information with rock-solid plausible deniability - I found it by accident after having implemented a part of a pipeline in the most obvious way I could think of.
[2]: the second had to do with how one can configure SharePoint to automatically protect files with AIP on download. The only problem was that if you logged in using another login sequence (sorry for the lack of details; this was before the pandemic, and it was just a small part of what I was working on at the time), SharePoint would conveniently forget all about it, despite all efforts by me, the security admin at the company, and the expert that Microsoft sent to fix it.
> the expert that Microsoft sent to fix it.
Ha ... ha ... ha ... ha ... did they give you the runaround for several months until you dropped the issue? It's actually pretty astounding that they don't get sued for this practice. If a company is paying for support and is given illiterate noobs, then that is a breach of contract, I would think. I would never recommend entering a contract with MSFT; they produce trash products they can't support and are more invested in their Legal team than the actual product.
No, as far as I remember it was more like they came, looked at it and either the same day or week just concluded it couldn't be done.
I thought the same when a friend of mine reported something to Apple. I would guess it's SOP at this point across big tech, unless something is too big to ignore.
You might have no idea how expensive providing great support to customers is when you're a vendor like Apple or Microsoft. It's like backports, which are even more unbelievably expensive still, and those are gone industry-wide for that reason.
Think of the opportunity cost of having smart, capable, experienced staff doing support or backports instead of actual dev work. (Especially backports: back when they were done frequently, they were done precisely because customers are risk-averse, so a great deal more review and testing (with a much larger test matrix) was required, with an attendant huge increase in cost.) That cost is enormous. But of course vendors do need to provide some support, and at some point some really good support for the really serious bugs, and the vendor will in time provide it, but first the customer demand and pressure has to build.
I can't speak to Apple, but with regard to Microsoft, you're not appreciating just how bad the support (or even the documentation) is, or how much people pay for support on top of the product.
I feel like I know more about M365 than anyone I talk to at MS. That's bad.
Oh trust me, I know how bad it can be. I wouldn't say that I 'appreciate' it though!
Reminds me of an issue I reported years ago to the super-special-premier support my company pays for. I never got to somebody who actually understood the issue but there were several managers who constantly tried to have meetings and close the ticket.
> there were several managers who constantly tried to have meetings and close the ticket.
Managers on the support side or your teams?
Microsoft side. It was pretty clear that they were evaluated by closing tickets quickly.
Support orgs love to measure how long it takes to close tickets, but rarely whether the problem was actually resolved, or what the customer sentiment was.
I had a friend who worked for a cable ISP decades ago in the UK. Management of the support organization got outsourced to another company, which set aggressive targets for call length. Not average call length, but the length of any call they received. Any call that went over the target was a mark against the support person; more than a few marks got you a dressing-down from the supervisor, a few more after that got you a written warning, and a few more would see you fired.
It started out at 15 minutes, and that was okayish. It took about 6 minutes to reboot a cable modem and have it come back online, and that was done in almost every single support case and fixed at least half of them.
Then they cut it down to 10 minutes. That was squeezing it a bit: 4 minutes at most to do all the introductions, hear the problem, wait for the modem reboot, and test that things were resolved.
Then they cut it down to 5 minutes. The support folks had literally no choice but to just randomly hang up on people as soon as they got close to 5 minutes, or ask them to reboot the modem and phone back. "Oh, I'm sorry, we must have been randomly disconnected."
The intention of the password entry dots isn’t to prevent folks with unrestricted physical access to the machine from exfiltrating information, it’s to stop it from appearing in screenshares and casual “over the shoulder” observations.
Honestly I’m surprised they even acknowledged that as a bug, given there are many ways to get a whole lot more info than what you demonstrated - for instance the built-in “eye” button that is purpose-built to reveal the full password to anyone with physical access to the machine who wishes to see it.
If the eye button is available, revealing the password is clearly the intention.
This wasn't such a case.
That said, I didn't expect to get rich; it was just that the experience didn't give me anything back for the effort I put in.
The caller still needs at least the Reader role, so it was limited to accounts that were added to the Azure subscription as only Readers.
I'm glad they fixed it, but this doesn't seem too scary??
Suppose user U has read access to Subscription S, but doesn't have access to keyvault K.
If user U can gain access to keyvault K via this exploit, it is scary.
[Vendors/Contingent staff will often be granted read-level access to a subscription under the assumption that they won't have access to secrets, for example.]
(I'm open to the possibility that I'm misunderstanding the exploit)
The API Connection in the example has permissions to read the secrets from the KeyVault - as per the screenshot.
It seems to me the KeyVault secret leak originated when the KeyVault K owners gave secret-reader permissions to the API Connection. (And I will note that granting permissions in Azure requires the Owner role, which is way more privileged than the Reader role mentioned in this article.)
[edit - article used Reader role, not Contributor role]
My reading of this is that the Reader must have read access to the API Connection in order to drive the exploit [against a secure resource they lack appropriate access to]. But a user can have Reader rights on the Subscription, which do cascade down to all objects, including API Connections.
But also the API connection seems to have secret reader permissions as per screenshot in the article… Giving secret reader permission to another resource seems to be the weak link.
The API Connection in a Logic App contains a secret in order to read/write (depending on permission) a resource. Could be a Key Vault secret, Azure App Service, Exchange Online mailbox, SharePoint Online site..., etc.
The secret typically is a user account (OAuth token), but it could also be an App Id/Secret.
Your take is spot on, sir.
It's a feature, not a bug: "Azure’s Security Vulnerabilities Are Out of Control" - https://www.lastweekinaws.com/blog/azures_vulnerabilities_ar...
> Let’s start with some empathy, because let’s face it: Nobody sets out to build something insecure except maybe a cryptocurrency exchange.
:-)
Nobody sets out to build something insecure, but if they go with Azure....
"Microsoft confirms partial loss of security log data on multiple platforms" - https://www.cybersecuritydive.com/news/microsoft-loss-securi...
"Microsoft called out for ‘blatantly negligent’ cybersecurity practices" - https://www.theverge.com/2023/8/3/23819237/microsoft-azure-b...
At least this new one seems to have been fixed within two months: January 6th to February 20th.
So this was vulnerable? https://azure.microsoft.com/en-us/explore/global-infrastruct...
Maybe with enough traction, they'll lose out on huge contracts because of stuff like this. Seems the only way to get stuff fixed is to attach dollars to it.
Oh, a confused deputy vulnerability. Azure is not the only cloud provider with that oversight, let me tell you.
>The Connector for Key Vaults is maybe the one with the highest impact.
Yeah, no joke. Considering how well protected Azure Key Vaults typically are, and what's in them (secrets, certificates, etc.), this is a huge way to compromise a lot of other things. It's like finding the keys to the doors.