Threat Model offers a free Covid safety list...
"Covid is airborne: it is in the exhaled breath of infected people. Vaccines and treatments are your last lines of defense. Post-infection immunity shortened to 28 days. 1 in 3 infected people are pre-symptomatic or show no symptoms. Long Covid usually comes from reinfections, most often “mild” infections. There is no limit to how many times you can catch Covid-19. The AMA wants people to know that getting reinfected is “akin to playing Russian Roulette.” Rapid tests can miss asymptomatic infections. "
Had a #ThreatModel session with two engineering teams today. A really extensive one, where preparation included a full review of what's already there. A tech stack we haven't touched at this company yet. A model where I could really build on my past experience, and I still felt I worked on it for way too long. And yet, it paid off. Had an insightful conversation with folks, we all learned from each other, and we paved the way for future small, lean modeling sessions. Huge win! #AppSec #ProdSec
#FediHelp
I need to talk with someone skilled in #threatModel (the digital side), specifically about downloads / archiving / wget mirroring (a sketch follows below) and online/offline work for field activities (logistics / investigation) and activist groups (water, mud, and soil investigation, with sampling and DIY analysis & data production).
I need to talk, so please don't point me to NGOs (I already know them, and I've been there too).
It's about a holistic security approach in this very specific niche.
Downloading things, offline access first, sharing (see Kiwix and the Kiwix interview at APC.org).
Being up a mountain, down by a river, in a sewer system, or the like.
Or around floods in streets / towns / cities / lands.
Radio (SDR) scanning in the field and emergency data transmission / copying.
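For the wget mirroring piece, a minimal sketch, wrapped in Python so it can sit in a field toolkit script; the URL is a placeholder and the flags assume the target site tolerates mirroring:

import subprocess

# Pull a browsable offline copy of a site for field use.
subprocess.run([
    "wget",
    "--mirror",            # recursive download with timestamping
    "--convert-links",     # rewrite links so pages work offline
    "--adjust-extension",  # save pages with matching file extensions
    "--page-requisites",   # also fetch images/CSS needed to render
    "--no-parent",         # stay below the starting directory
    "https://example.org/fieldguides/",  # placeholder URL
], check=True)

The result is an offline copy you can carry up the mountain or feed into a Kiwix-style archive.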
If the request isn't clear or understandable, I'm sorry; please feel free to poke me with your questions and thoughts.
Very, very important: carbon-mascu-male alpha-stupid-surviving-boyz are not welcome in this discussion, and I'm sure you get the point, my dear fedizens (no techbros, no cryptobros, and the like stay away).
cc @DigiDefenders @rysiek @onepict
@APC
@iffybooks @hackstub @lacontrevoie
Looking at some #AI generated #threatmodel output, and it listed stealing a user's credentials and using them under the "Spoofing" category. I was uncertain: is that spoofing or elevation of privilege? So I wander over to a #microsoft page on #stride.
They say it's spoofing, which is fine. It's reasonable. I don't care as long as we all agree.
But in that table, that's literally the only example of spoofing. There are a LOT of other kinds of things that could be called spoofing. If you're gonna have only one example of spoofing, I don't think stealing credentials is the best example.
Lastly, there's the training data. I work for #AWS (so these are strictly my personal opinions). We are opinionated about the platform. We think that there are things you should do and things you shouldn't. If you have deep knowledge of anything (Microsoft, Google, NodeJS, SAP, whatever) you will have informed opinions.
The threat models that I have seen, that use general purpose models like Claude Sonnet, include advice that I think is stupid because I am opinionated about the platform. There's training data about AWS in the model that was authored by not-AWS. And there's training data in the model that was authored by AWS. The former massively outweighs the latter in a general-purpose, trained-on-the-Internet model.
So internal users (who are expected to do things the AWS way) are getting threats that (a) don't match our way of working, and (b) they can't mitigate anyway. Like I saw an AI-generated threat of brute-forcing a Cognito token. While the possibility of that happening (much like buying a winning lottery ticket) is non-zero, that is not a threat that a software developer can mitigate. There's nothing you can do in your application stack to prevent, detect, or respond to that. You're accepting that risk, like it or not, and I think we're wasting brain cells and disk sectors thinking about it and writing it down.
The other one I hate is when it tells you to encrypt your data at rest in S3. Try not to: S3 encrypts new objects at rest by default, so there's no action for you to take. The thing you control is which key does the encryption and who can use that key.
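A minimal sketch of the part you do control, in Python with boto3; the bucket name and key ARN are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

# Point the bucket's default encryption at a customer-managed KMS key
# instead of the S3-managed default.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
            },
            "BucketKeyEnabled": True,  # reuse data keys to cut KMS calls
        }]
    },
)

Who can use that key is then a KMS key-policy question, not something your application stack answers.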
So if you have an area of expertise, the majority of the training data in any consumer model is worse than your knowledge. It is going to generate threats and risks that will irritate you.
4/fin
Threat models evolve over time, the same as your software does. Nobody is building a save/load feature into their AI-powered threat model. Getting deterministic output from consumer-grade LLMs is not a given, so even if you DO build save/reload capability, it's imperfect.
All the tools I've seen start every session from a blank sheet of paper. So if you're revisiting an app that you threat modeled before, because you want to update your model, you're going to start from scratch.
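A save/load layer doesn't need to be elaborate. A minimal sketch in Python of persisting threats between sessions; the file name and threat fields are hypothetical:

import json
from pathlib import Path

MODEL_FILE = Path("threat_model.json")  # placeholder location

def save_model(threats: list) -> None:
    # Persist the current threat list so the next session can resume.
    MODEL_FILE.write_text(json.dumps(threats, indent=2))

def load_model() -> list:
    # Reload prior threats instead of starting from a blank sheet.
    if MODEL_FILE.exists():
        return json.loads(MODEL_FILE.read_text())
    return []

# A session would load prior threats, feed them to the LLM as context,
# then save the merged result:
threats = load_model()
threats.append({"id": "T-001", "category": "Spoofing",
                "description": "Stolen credentials reused against the API"})
save_model(threats)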
3/n
Related to this, nobody seems to account for the fact that LLMs bullshit sometimes. If you pin someone down and ask, "the user of your AI-powered threat modeller: do they know how to do a threat model without AI?", many people will say "yes." Because to say "no" is to admit that people will be blindly following LLM output that might be total bullshit.
The goal, however, of many of these systems is to make threat modeling more accessible to people who don't know how to do it. To do that, though, you'd have to be more skeptical about your user, and spend some time educating them. Otherwise, they leave the process no smarter than they began.
Honestly, I think a lot of people think the threat model is going to be done entirely by the AI and they want to build a system where the human just consumes and uses it.
2/n
I have seen a lot of efforts to use an #LLM to create a #ThreatModel. I have some insights.
Attempts at #AI #ThreatModeling tend to do 3 things wrong:
1/n
The #encryption topic in #InstantMessaging is popular again recently. As usual, there's a lot of misunderstanding and little discussion of a #ThreatModel when giving recommendations.
If the private key is backed up from your phone with Apple or Google, then your messages may as well not be encrypted. I've again seen this indirectly with contacts changing phones: their keys are the same as on their old device, due to automatic backups I guess.
Doesn't matter if it's #WhatsApp, #Signal or #XMPP
Fediverse. I need your magic. Please tell me your most amusing and wtf #ThreatModel fails.
#COVID #News #Pandemic #LongCOVID
Threat modelling is extremely relevant here.
Tails has a specific #ThreatModel:
- amnesic
- live
- incognito
There is hardly any process isolation there, of the kind #Flatpak and #Bubblejail provide and #QubesOS masters.
And the idea that it makes you safe on any arbitrary PC is unfortunately also a false promise. #Coreboot is essential because it is minimal. As little code as possible should run at the lowest level. Intel ME should be off. #Heads is important too.
#IAB #RFC7624 - #Confidentiality in the Face of Pervasive #Surveillance: A #ThreatModel and Problem Statement
#ietf
In the return of #threatmodel Thursday, I look at work by @rmogull and Chris Farris on their Universal Cloud Threat Model.
https://shostack.org/blog/universal-cloud-threat-model-threat-model-thurs/
Friends, rivals, luminaries of #infosec: I had a #threatModel session recently involving an #LDAP service, and the team has a challenge. They don't have a great way to throttle or limit the volume of requests they answer, and when someone's running credential stuffing against a service there can be hundreds of millions of invalid requests over a couple of hours. They just have to soak it up, and I don't like seeing that.
Obviously we could use a WAF for the web services, but what about LDAP?
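Absent an LDAP-aware WAF, one option is a throttling shim in front of the directory. A minimal token-bucket sketch in Python, keyed by client IP; the rate and burst numbers are made-up placeholders:

import time
from collections import defaultdict

RATE = 10.0   # allowed bind attempts per second, per client (placeholder)
BURST = 20.0  # short burst allowance (placeholder)

# Each client starts with a full bucket; the lambda runs on first sight.
buckets = defaultdict(lambda: (BURST, time.monotonic()))

def allow_bind(client_ip: str) -> bool:
    # Token bucket: refill at RATE, spend one token per bind attempt.
    tokens, last = buckets[client_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1.0:
        buckets[client_ip] = (tokens, now)
        return False  # drop or tarpit instead of forwarding the bind
    buckets[client_ip] = (tokens - 1.0, now)
    return True

Over-limit binds get rejected before they reach the directory, which at least caps what the service has to soak up.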
Playing with phanpy.social, it seems that authorizing new apps to access Mastodon doesn't require a two-factor auth code.
While I haven't fully threat modeled it (you're already logged into the browser, so someone with browser access may not represent a shift in trust boundary), it feels off.
Does your threat model include your employee's cat?
https://www.theregister.com/2023/10/05/hospital_cat_incident/
#DoINeedAVPN is an #opensource #tool that helps people decide if they need a commercial #VPN. The tool is designed to #help #users determine whether a VPN is #necessary for their privacy and security needs, e.g. given their #threatmodel.
#passkeys #fidokeys #passwords #threatmodel What is the actual difference? #fido2 #yubikey
Thinking Differently About Passkeys: New Threats Require a New Threat Model