Gov. Newsom signs executive order to study use of AI

Newsom recently signed an executive order addressing the increasing use of artificial intelligence by studying the development, use and risks of AI technology in California. Photo courtesy of Unsplash

On Sept. 6, Gov. Gavin Newsom signed an executive order to prepare the state of California for the rise of artificial intelligence (AI).

“This is potentially transformative technology — comparable to the advent of the internet — and we’re only scratching the surface of understanding what GenAI (generative AI) is capable of. We recognize both the potential benefits and risks these tools enable. We’re neither frozen by the fears nor hypnotized by the upside. We’re taking a clear-eyed, humble approach to this world-changing technology,” Newsom wrote in a statement.

The executive order includes actions such as a risk-analysis report, state employee training and a report on beneficial uses of GenAI for state agencies and departments assessing the use of AI within state government. The latter report would require state agencies and departments to examine the most significant and beneficial uses of GenAI within California, as well as the potential risks associated with AI.

Beyond its growing presence in state agencies, departments and communities, AI has also been integrated into education.

With the increasing accessibility and use of AI in schools, professors and students share their thoughts pertaining to the benefits and drawbacks of AI in education.

Samantha Dressel, an assistant professor of English at Chapman University, discussed both the benefits and detrimental effects of AI.

“I think that it has potential for both, because I think the potential downfall is just people using it thoughtlessly generating papers, and it’s just another cheating mechanism,” said Dressel, a faculty senate director of communications. “But, I think there is a lot of potential for using it and using it as a resource, using it essentially as a research assistance, using it as a way to generate ideas. If you’re using GenAI for any kind of project, it should be equally as much work as writing something from scratch; it’s just a different type of work.”


Assistant chemistry professor Peter Chang strongly advocated for the use of AI in academics, even though the technology comes with mistakes, malfunctions and biases.

“There can easily be two major sources of errors for an AI system since AI depends upon people importing data for AI to do the calculation (thinking) and if a company outsources the data entry jobs,” Chang said.

Although some schools may be concerned about copyright issues and plagiarism risks in academia pertaining to the use of AI, Chang doesn’t believe it should be prevented. 

“AI can do the same thing searching for the known methods faster than I can, but AI cannot create new ideas like human beings,” Chang said.

Sydney Chung, a sophomore environmental science and policy major, shared her perspective on the use of AI.

“But, I do understand that I feel like there’s no originality to it if the AI is giving you ideas. It’s hard to draw the line, like it can give you inspiration and like give you ideas that will lead you to your own original ideas, but it’s hard for other people who didn’t know what your thought process was to see that it was an original idea or if it was just done by AI,” Chung said.

Despite the potential costs of using GenAI, its use continues to spread across many facets of life, from state government to educational institutions.

“I think what I’m worried about is AI improving. I think it’s a little scary for me that AI will get better and better, because right now, I hear it’s obvious when you can tell that ChatGPT is used,” Chung said. “If you just put in a prompt, it’s obvious that it’s very vague, and it’s bad evidence, but I’m worried about when it will get better. And, it’s hard to distinguish if it’s actually written by a student or not, so I think that’s a challenge that might occur in the future, it might be harder and harder to tell.”

Grace Song

Grace Song is a sophomore at Chapman University majoring in English. She is from Orange County, California, and is a staff writer for the Politics section of The Panther Newspaper.
