[ad_1]

Credits: CC0 Public Domain
The Federal Trade Commission has launched an investigation into ChatGPT maker OpenAI. Potential Violation of Consumer Protection Laws, The FTC sent the company a 20-page demand for information the week of July 10, 2023. The move comes as European regulators started taking actionAnd Congress is working on legislation To regulate the artificial intelligence industry.
The FTC has asked OpenAI to provide details of all complaints the company has received from users.false, misleading, defamatory, or harmful“Statements made by OpenAI, and whether OpenAI engaged in unfair or deceptive practices related to risks of harm to consumers, including reputational damage. The agency asked detailed questions about how OpenAI obtains its data, how it builds its models, How it trains, processes it, uses its mechanisms for human response, risk assessment and mitigation, and privacy protections.
As a researcher of social media and AII recognize the enormous transformative potential of generic AI models, but I believe these systems venture, In particular, in terms of consumer protection, these models can generate errors, exhibit bias, and violate personal data privacy.
hidden power
At the heart of chatbots like ChatGPT and image generation tools like DALL-E lies the power of generative AI models that can create realistic content from text, image, audio and video inputs. These tools can be accessed through a browser or smartphone app.
since There is no predefined use for these AI models, they can be fine-tuned for a wide range of applications in various domains from finance to biology. Models trained on large amounts of data can be adapted to a variety of tasks with little or no coding, and sometimes even simply by describing a task in simple language.
Given that AI models such as GPT-3 and GPT-4 were developed by private organizations using proprietary data sets the public does not know the nature of the data used to train them, The ambiguity of the training data and the complexity of the model architecture were GPT-3 Trained on over 175 billion variables or “parameters”Make it difficult for anyone to audit these models. Consequently, it is It is difficult to prove that the damage is caused by the way they are built or trained,
nightmare
In language model AI, hallucination is a confident response Incorrect and does not appear to be fit by the training data of a model, Even some generic AI models were designed to reduce the likelihood of hallucinations raised them,
There is a danger that generative AI models may generate incorrect or misleading information that may ultimately be harmful to users. A study examining ChatGPT’s ability to generate factually correct scientific writing in the medical field found that ChatGPT ended up either Generating citations for non-existent papers or reporting non-existent results, my colleague and me found similar patterns in our investigation.
Such hallucinations can cause real harm when models are used without adequate supervision. For example, chatGPT Falsely claimed that a named professor was accused of sexual harassment, and a radio host has filed a Defamation lawsuit against OpenAI Falsely claimed that ChatGPT had a legal complaint of embezzlement against it.
prejudice and discrimination
Without adequate safeguards or safeguards, generative AI models trained on large amounts of data collected from across the internet can mimic existing social biases. For example, organizations using generic AI models to design recruitment campaigns may inadvertently discriminate against certain groups of people.
When a reporter asked the DALL-E 2 to produce photos of “a technology journalist writing an article about a new AI system that can create remarkable and strange images,” This only generated pictures of men, an ai portrait app displayed many socio-cultural biasesFor example by lightening the skin tone of an actress.
data privacy
Another major concern, especially related to the FTC investigation, is the risk of privacy violations where AI could disclose sensitive or confidential information. A hacker could gain access to sensitive information about people whose data was used to train an AI model.
Researchers warn of risks from manipulation called prompt injection attacks Generative AI can be tricked into giving it information it shouldn’t, “indirect quick injection” attack AI can trick the model With steps like sending someone a calendar invite with instructions to export the recipient’s data to their digital assistant and send it to the hacker.
some solutions
European Commission has published Ethical Guidelines for Trustworthy AI which includes an evaluation checklist for six different aspects of AI systems: human agency and supervenience; technical robustness and security; privacy and data governance; transparency, diversity, non-discrimination and fairness; social and environmental welfare; and accountability.
Better documentation of AI developers’ processes could help uncover potential pitfalls. For example, researchers of algorithmic fairness model cards are proposedwhich are similar to nutritional labels for food. data description And Data SheetWhich characterizes the data set used to train the AI model will play a similar role.
For example, Amazon Web Services introduced AI Service Cards that describe its uses and limitations. some models offer this, The cards describe the model’s capabilities, training data, and intended use.
The FTC’s investigation indicates that this type of disclosure may be a direction that US regulators take. Furthermore, if the FTC finds that OpenAI violated consumer protection laws, it can fine the company or put it under a consent decree.
This article has been republished from Conversation Under Creative Commons Licence. read the original article,
Citation: FTC probe of OpenAI: Consumer protections are an early defense of US AI regulation (2023, July 19) Retrieved from here July 19, 2023
This document is subject to copyright. No part may be reproduced without written permission, except in any fair dealing for the purpose of personal study or research. The content is provided for information purposes only.









