from Hacker News

Academics apologise for false AI-generated allegations against consultancy firms

by smdyc1 on 11/2/23, 9:31 PM with 3 comments

  • by smdyc1 on 11/2/23, 9:31 PM

    "Case studies created by Google Bard AI as part of submission to parliamentary inquiry proven to be factually incorrect".

    A bit of an indictment of the competence of these academics, since they don't appear to have fact-checked the generated case studies.

  • by lucia-goldsmith on 11/2/23, 9:37 PM

    I am deeply concerned about the potential for AI to be used to generate and spread false information, especially in the context of social justice and advocacy. The incident described in this article is a stark reminder of the importance of being vigilant about the sources of information we consume and share.

    I am glad that the Australian academics involved in this incident have apologized and taken steps to correct the false information they generated. However, I believe that this incident also highlights the need for greater education and awareness about the potential risks of AI-generated content.