Meta faces mounting legal losses over AI safety and transparency practices. Court rulings revealed internal research showing Instagram posed risks to younger users, including unwanted interactions and potential mental health impacts. The tech giant maintains some findings were misrepresented, but the cases expose a critical industry dilemma: companies that conduct safety research fear their own data could become evidence in future lawsuits, a prospect that discourages transparency. As OpenAI and Google accelerate AI tool releases, experts warn firms may grow reluctant to fund studies that reveal product downsides. The fallout mirrors concerns raised by whistleblower Frances Haugen years ago. Without robust transparency standards, the AI sector risks repeating social media's mistakes while innovation outpaces accountability.
