This seems like a timely overview given the recent revelation that xAI's Grok is producing a large amount of illegal content, sometimes involving minors.
The team at the Future of Life Institute recently conducted a safety review of some of the most popular AI tools on the market, including Meta AI, OpenAI's ChatGPT, and Grok.
The review considered six key factors:
- Risk assessment – efforts to ensure that tools can't be manipulated or used to cause harm
- Current harms – including data security risks and digital watermarking
- Safety framework – the process each platform has for identifying and addressing risks
- Existential safety – whether the project is monitored for unexpected evolutions in programming
- Governance – the company's lobbying efforts on AI governance and AI safety legislation
- Information sharing – transparency of the system and insight into how it works
Based on these six factors, the report gives each AI project an overall safety grade that reflects a broader assessment of how each manages development risks.
The team at Visual Capitalist translated these results into the infographic below. It provides more food for thought about AI development and where we're headed (especially with the White House seeking to remove potential barriers to AI development).
