The Australian federal government has expressed concerns about high-risk applications of AI and their potential impact on privacy. Growing public mistrust of artificial intelligence has prompted the government to move toward regulating the technology, including asking tech companies to watermark or otherwise label AI-generated content. An inquiry into safe and responsible AI received more than 500 submissions, prompting Industry and Science Minister Ed Husic to advocate stricter regulation of high-risk AI systems, such as those used to predict recidivism or assess job suitability.
Many tech giants and other organizations, including Google, Meta, banks, supermarkets, legal bodies, and universities, made submissions to the inquiry. Concerns raised include mounting socioeconomic inequality and harmful outcomes driven by poor-quality data. While the government acknowledges that adopting AI and automation could boost the country's GDP, it aims to strike a balance between encouraging innovation and addressing public safety concerns.
The government's 25-page response to the inquiry cited surveys showing that only a third of Australians believe there are adequate safeguards around the design and development of AI. The government plans to set up an expert advisory group, establish a voluntary AI safety standard, and consult with the tech industry on transparency measures. Mandatory safeguards under consideration include pre-deployment risk and harm-prevention testing for new AI products, training standards for software developers, and measures addressing deepfakes and copyright infringement caused by AI technology.