Basically, rotate as soon as you can* and start looking through your AWS logs and settings to see if any services you don't recognize have been spun up. If you think you have been attacked or see stuff you didn't spin up, contact AWS support ASAP!
*Do NOT just revoke keys in a production system that other people are working in or depending on. Talk to your team, figure out what the internal remediation process is, and follow that!
If you are working by yourself and no one is relying on the services this key is associated with, then yeah, just revoke and replace ASAP.
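To make the "look through your logs" step concrete, here's a minimal sketch of triaging CloudTrail-style events against an allowlist of services you know you actually run. The `KNOWN_SERVICES` set and the sample events are illustrative assumptions, not anything AWS-specific beyond the standard `eventSource` field:

```python
# Sketch: flag CloudTrail events from services you don't recognize.
# KNOWN_SERVICES is an assumption -- fill in your real footprint.
KNOWN_SERVICES = {"s3.amazonaws.com", "lambda.amazonaws.com"}

def unexpected_services(events, known=KNOWN_SERVICES):
    """Return the set of event sources that are not in the allowlist."""
    return {e["eventSource"] for e in events if e["eventSource"] not in known}

# An attacker quietly spinning up EC2 instances would surface here.
sample = [
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
    {"eventSource": "ec2.amazonaws.com", "eventName": "RunInstances"},
]
print(unexpected_services(sample))  # -> {'ec2.amazonaws.com'}
```

It's crude (a real attacker can hide inside services you already use), but it's a fast first pass while you're waiting on AWS support.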
> In the case of computer engineers maybe they need to be legally responsible directly for this kind of stuff.
Don't you think that management deserves a share of the trophy? If everything that goes wrong with software is blamed on engineers, then what do we need management for?
For example, they may have failed to ensure that sufficient resources are available for code reviews, or they may have cut corners by assigning a project to a junior that should have gone to a senior, by assigning one developer to a project where three are needed, by setting unrealistic deadlines that put people under unnecessary stress, or by saying "It's just a quick PoC" and then later "We need that in production yesterday".
Management should, with some exceptions, always be the first ones to point the finger at, but software developers have so little respect in society that even they will throw their colleagues under the bus this way. The day a large-scale disaster occurs because of a software bug will be the day we face greater scrutiny and regulation, while employers get more government funding and corporate welfare to cover their asses while lining their pockets.
Yikes. It is sad to hear stories like that, where security is not a concern until panic sets in. :(
Yet another reason we need to adopt standards like security.txt and make it easy to report these things as it is to tell robots to ignore us with robots.txt. See securitytxt.org for more on the project.
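For reference, a minimal security.txt (per RFC 9116, served at /.well-known/security.txt) looks something like this; the example.com addresses are placeholders:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Only `Contact` and `Expires` are required fields; the rest are optional.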
It's tough. I'm on our public security reporting email list.
We get a lot of things that boil down to "When I go to your website, I am able to see the content of your html files!" ... yes, reporter. That is what a web server does. It gives you HTML files. Congrats that you have figured out the dev console in your browser, but you're not a hacker. I'm trying to go with Hanlon's razor here and assume these are inexperienced people and not outright scams.
We don't get a lot of these, but they far outnumber actual credible reports. We still try our best and take everything seriously until it can be disproven. And it's exhausting. So I get it sometimes. Sometimes having a place for responsible disclosure just opens yourself up to doing more paperwork (verifying that the fake reports are fake). That said, we still do it.
> Sometimes having a place for responsible disclosure just opens yourself up to doing more paperwork
100% this. And it bites harder when you're a scrappy, time-constrained startup, or just offering a public service.
I maintain a public API that returns public information: observable facts about the world. As such, the API doesn't have any authn/z. Anyone can use it as little or as much as they want, free of charge.
Of course I get at least 1 email per year telling me my API is insecure and that I should really set up some OAuth JWT tokens and blah blah blah.
I used to reply telling them they were wrong, but it gets hostile because they want money for finding the "vulnerability".
On the flip side, at another company I once got a security@ email that sounded like a false alarm. I quickly wrote it off and sent a templated response. Then they came back with screenshots of things that shocked me. It was not a false alarm. That guy got paid a handsome sum and an apology from me for writing him off.
Or this! It's not just paperwork, but also mental capacity. Having a place for responsible disclosure yields enough "fake" disclosures that you become desensitized to it. Boy who cried wolf style.
It's possible "security isn't a concern" because they are dismissing the report, not the security.
I think the fundamental problem is that a lot of orgs just don't care about security, as it doesn't affect their bottom line. Even breaches are only a temporary PR hit. The proper way to address that might just be legislation, with heavy fines based on total revenue.
That and also security is just hard to scale. That's why if it was mandated by legislation, companies would be forced to spend a comparable amount on scaling their security teams and efforts.
Most respectable services will have an abuse@ address you can contact. They should at least be able to get your issues where they need to go internally. I've had very good results for companies and networks in the US.
Toyota is claiming no, not with this leak. It was a partial repo that was exposed, and the data accessed with the key included only customer ID numbers and email addresses.
GitGuardian's public report on secrets sprawl describes their methodology of scanning commits: https://www.gitguardian.com/state-of-secrets-sprawl-report-2...
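The core idea behind that kind of scanning can be sketched in a few lines. This is not GitGuardian's actual implementation, just an illustration using one well-known pattern: AWS access key IDs, which are "AKIA" followed by 16 uppercase letters/digits (the key below is the placeholder from AWS's own docs):

```python
import re

# Illustrative pattern: AWS access key IDs look like AKIA + 16 chars.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_key_ids(text):
    """Return any substrings of `text` that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

diff = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_aws_key_ids(diff))  # -> ['AKIAIOSFODNN7EXAMPLE']
```

Real scanners combine many such patterns with entropy checks and validity probing, and they run against every commit in a repo's history, not just the current tree, which is why rotating a leaked key matters more than deleting it from the latest commit.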