Pinned · Published in System Weakness
3 Steps to protect yourself from Prompt Injection
Head over to https://defender.safetorun.com to quickstart!
May 1, 2024
4 Essential Authorisation Strategies for Agentic AI | Prompt Shield | AI Application Security
Dec 27, 2024
How to test the safety and security of LLM apps
I've written a guide on how to test LLM apps for security and safety. Looking forward to hearing what you think!
Dec 19, 2024
GitHub Copilot Edits to update multiple files
I was really excited to see the latest feature added to GitHub Copilot for VS Code, albeit in preview.
Nov 8, 2024
Genkit (VertexAI) — Crashing with permission error and how to fix it
I kept getting an error that looks something like this
Oct 5, 2024
A Quieter revolution — subtler ways to use generative AI to change how we do product engineering
A failure of imagination
Sep 20, 2024
The best attacks and defences against prompt injection
A framework for evaluation of attacks and defences
May 10, 2024