Crustacean Trilemma: Agency, Generality, Risk
Over the last few years, I've kept returning to the same question when thinking about AI systems: what actually makes them dangerous in practice? I'm especially interested in environments with real infrastructure, real incentives, and real failure modes, the kind that practical AI security leaders actually have to reckon with.