Google has warned its employees not to disclose confidential information to, or directly use code generated by, its own AI chatbot, Bard.
The policy is not especially surprising: the Chocolate Factory also advises users, in an updated privacy notice, not to include sensitive information in their Bard conversations. Other large firms have issued similar warnings, telling employees not to leak proprietary documents or code and not to use AI chatbots at work.
Google's recent warning to its own employees raises the concern that AI tools built by private companies cannot be fully trusted, if even their creators avoid them over privacy and security risks. Cautioning staff against using AI to write code, or against using the code it generates, also undermines Google's claim that its chatbot can help developers become more productive.
The search and advertising giant told Reuters that the internal ban was introduced because Bard can output "undesired code generations". Such output can lead to bugs or needless complexity, and buggy, bloated software may cost developers more time to fix than they would have spent writing the code without AI at all.
Nuance, the voice recognition software developer acquired by Microsoft, has been accused in an amended lawsuit filed last week of recording and using people's voices without their permission. Three people sued the firm, alleging that it violated the California Invasion of Privacy Act, which bars businesses from wiretapping consumers' communications or recording people without their explicit written consent.
The plaintiffs claim that recording people's voices exposes them to risk: they could be identified while discussing sensitive information, and their voices could be cloned to bypass Nuance's own security features. If left unchecked, they argue, citizens face a growing risk of unknowingly having their voices analyzed and mined for data by third parties making determinations about their lifestyle and health.

Meanwhile, Google does not support the idea of a new federal AI regulatory agency. The company has instead called for a multi-layered, multi-stakeholder approach to AI governance and backed a "hub-and-spoke" model. AI raises distinct issues in financial services and other regulated industries, areas that benefit from the expertise of regulators already experienced in those sectors. Google argues this works better than a new agency imposing upstream rules that cannot account for the diverse contexts in which AI is deployed.
OpenAI reportedly warned Microsoft against releasing its GPT-4-powered Bing chatbot too quickly, because it could generate false information and inappropriate language. At launch, Bing duly shocked users with its creepy tone and sometimes manipulative, threatening behavior. Microsoft later restricted conversation lengths to keep the chatbot from going off the rails.
Microsoft holds a 49 percent stake in OpenAI and gets to access and deploy the startup's technology ahead of rivals. Unlike with GPT-3, however, Microsoft does not have exclusive rights to license GPT-4. That makes things awkward at times: OpenAI will often court the same clients as Microsoft, as well as businesses that compete directly with its investor.
The recent actions and policies of Google and Microsoft underscore the challenges that come with AI technology. Google's decision to warn its employees off AI-generated code raises questions about the reliability of AI tools: Bard is pitched as a way for developers to boost their skills and productivity, yet the warning highlights that AI-generated code can introduce bugs and complexity. Developments in artificial intelligence demand careful consideration of privacy, security, and other ethical issues.