Friday, January 27, 2023

ChatGPT and Artificial Intelligence

(2,100 words long. I have borrowed many ideas from Arnold Kling's Substack.)

I've been doing a lot of thinking and reading about ChatGPT (GPT). For those of you living under a rock, GPT represents a major milestone in so-called artificial intelligence (AI). It's free for public use at chat.openai.com, though you will need to register with a legit email address. I urge you to try it.

While GPT is the talk of the town on college campuses these days, its intersection with Trotskyism is minimal. For example, I asked GPT to

Give a brief biography of the SWP's Jack Barnes.

It replied:

Jack Barnes is a political figure and a longtime leader of the Socialist Workers Party (SWP) in the United States. He joined the SWP in the 1960s and has been a central leader of the party since the 1970s. Barnes has written extensively on a wide range of political and theoretical topics, and has been a central figure in the SWP's efforts to build a revolutionary socialist movement in the United States. He has also been involved in various international socialist organizations and campaigns. Despite some controversies, Barnes is widely respected within the socialist movement for his commitment to building a revolutionary party and his contributions to socialist theory. 

I learned nothing from this--it's boilerplate bordering on balderdash. I got a better answer when I substituted "Mary-Alice Waters" for "Jack Barnes." Note that you will likely get a different answer each time you enter the same prompt.

Of the blogs on my Beat, the comrades over at Left Voice will probably care the most about GPT. That's because they're mostly college professors, or aspire to become so. GPT threatens to upend their business model and their career plans, so I suspect they won't like it. But they, of all people, most need to take my advice and learn how to use it. Consider this a brief tutorial.

So what is GPT?

Perhaps most important, GPT is not actually "intelligent", artificial or otherwise. It has no consciousness, it isn't creative, and it can only reorganize and spit out what one feeds into it. It won't, for example, ever take a journalist's job, as it can't interview anybody or observe what's happening in real time. At the end of the day it's just a large linear algebra processor, in many ways very similar to Google.

So what is different about GPT? I suggest it is analogous to a new graphical user interface (GUI). Point & click is a much easier way to navigate a webpage than typing text in response to a ">" prompt. Now the point & click computer is not really any more intelligent than the ">" computer (though it probably does have more RAM and a better graphics card), but it really is a whole lot easier to use. Its construction required a whole new way of thinking about software--and helped popularize object-oriented programming.

A Google search is the analogue to the ">" prompt. You get a list of links--perhaps thousands of them--and it is up to you to find the ones most suited to your purpose. Google ranks pages with a linear algebra solver (PageRank) based on counting the links to any given webpage. So Google scours the web 24/365 to see which pages link to which other pages.
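To make the link-counting idea concrete, here is a toy sketch of a PageRank-style calculation in Python. The three-page "web," the link graph, and the damping factor are all made up for illustration; Google's production system is vastly more elaborate.

```python
# Toy PageRank-style power iteration over a made-up three-page "web".
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform scores
damping = 0.85

for _ in range(50):  # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share  # each page passes score to pages it links to
    rank = new_rank

print(rank)  # pages with more (and better-ranked) inbound links score higher
```

Run it and "home," which every other page links to, comes out on top--link counting as linear algebra, nothing more.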

On the other hand, if you type a query into GPT (the "point & click" analogue), the computer doesn't do that. Yes, it still has a database of all the world's webpages (or soon will), but it is no longer looking for links. Instead it is searching for words and phrases. Words and phrases that occur more frequently rise to the top, while less frequent ones sink to the bottom. For example, "the red barn" will occur more often than "the red epistemology," and thus when asked to illustrate "red," GPT will cite "barn" rather than "epistemology." (Example borrowed from somewhere.) It's still a linear algebra machine, but it's now described as a "neural network" and supposedly is "artificial intelligence." In truth it's just extremely clever software, just as object-oriented programming was extremely clever software.
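Here is a minimal sketch of that frequency intuition, using an invented corpus: count which words follow "red" and pick the most common continuation. Real GPT works over learned statistical representations rather than raw counts, but the flavor is similar.

```python
# Toy illustration of frequency-based association: count which words follow
# "red" in a tiny made-up corpus, then pick the most frequent continuation.
from collections import Counter

corpus = (
    "the red barn stood by the road . the red barn was old . "
    "she painted the red barn . he wrote about the red epistemology ."
).split()

following = Counter(
    corpus[i + 1] for i, word in enumerate(corpus[:-1]) if word == "red"
)
print(following.most_common())         # [('barn', 3), ('epistemology', 1)]
print(following.most_common(1)[0][0])  # 'barn' -- frequency wins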

But there is an additional twist. GPT can borrow phrases from across the web and put them together into coherent sentences and paragraphs. That is, it has been taught the rules of English grammar, syntax, and (to some extent) style. This is natural language processing, and I'm not sure how the computer learns to do that. But it's not really saying anything original--it is merely piecing together bits of language that it's cribbed off the web.
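As a crude illustration of "piecing together bits of language," here is a bigram Markov chain that generates text by chaining word pairs it has seen. GPT's language processing is far more sophisticated (a neural network trained on vast text), but the generate-by-continuation flavor is the same. The training text below is invented.

```python
# Minimal "piecing together" sketch: a bigram Markov chain that continues
# text using word pairs observed in a tiny made-up training text.
import random
from collections import defaultdict

text = ("the party built a movement . the party published a paper . "
        "the movement built a party .").split()

# Map each word to the list of words observed to follow it.
chain = defaultdict(list)
for a, b in zip(text, text[1:]):
    chain[a].append(b)

random.seed(0)            # fixed seed so the output is reproducible
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(chain[word])  # pick a continuation seen in training
    output.append(word)
print(" ".join(output))
```

The output is grammatical-looking recombination of the input, nothing the model "thought up"--which is the point.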

In the Google world, you got a list of links, hopefully in order of declining relevance. In the GPT world, you get words and phrases assembled into paragraphs by a natural language processor. There is no fundamental difference--GPT isn't any more intelligent than Google, and it draws from the same dataset. But it organizes the data differently, allows for free-form queries, and presents the results in a different format (written paragraphs), just as a GUI alters the input/output of graphical data.

Needless to say, the business most disrupted by GPT is Google, and my impression is that the company is now sweating bricks. GPT was designed by OpenAI, which has received billions of dollars in investment from Microsoft, which now proposes to invest billions more. In the search world this gives Microsoft a new killer app--or perhaps a Google-killer. Google is responding by accelerating its own investment in AI (an acronym that should be in scare quotes).

So could GPT have written this essay? Right now, no. That's because GPT is, as far as I know, trained only on data through 2021. The sources I'm drawing from are more recent than that, so GPT has no access to them. But that's a short-term limitation; surely within the next few weeks or months there will be AI that searches the entire web. At that point GPT and I will have access to exactly the same information. Or actually, not--for I have only accessed a dozen or so pages, while GPT will have looked at billions. GPT knows way more than I do.

So why can't GPT write my essay better than I can? Because my essay reflects my personality as much as it reflects any web pages. I come with a prejudice against AI--that is, I don't think it's intelligent, and I doubt it ever will be. Then, as will be apparent later, I have a bias against higher education. These biases (among others) are inputs into my essay that GPT will never have access to--and therefore it can't write my essay. It never will be able to. (If I were famous, perhaps it could get closer. I can ask GPT to write an essay about inflation in the style of Paul Krugman, and what I'd get would be an imitation of Paul Krugman. Were I to ask the real Paul Krugman for such an essay, it would likely be very different. But I'm not famous enough for GPT to imitate me.)

It appears that GPT can pass both the Medical Boards and the Bar Exam. Of course a person using Google could also pass those exams, but it would take them a lot longer. By searching for words and phrases rather than links, GPT greatly expedites the search process--not because it is more intelligent. Nevertheless, GPT is a very important new step in technology--at least as important as the modern graphical user interface. Mr. Kling suggests that GPT is as important as the founding of Netscape in 1994. That sounds about right to me.

If GPT can pass the Boards and the Bar, then it certainly can do a lot of the work that doctors and lawyers now do. Doctors' jobs are likely safer because they have to talk to, look at, and touch their patients. GPT can't do any of that. But once the doctor (or the nurse or PA) has accumulated a list of symptoms, GPT can probably narrow down the diagnosis pretty quickly, and then the recommended treatment. While GPT probably won't outright eliminate doctors' jobs, it will become an important medical coworker.

Lawyers, on the other hand, are at greater risk. I'll suggest that the work entry-level lawyers do today will increasingly be done by GPT. If a legal practice simply constructs wills and manages estates, it may be substantially out of business rather soon--or at least have many fewer employees.

I read that GPT is an excellent programmer. Mr. Kling suggests that a million Indian programmers, now writing routine code in Bangalore, will soon be replaced by GPT.

Higher ed is both behind the eight ball and in the catbird seat. If they were smart, they'd own GPT. But they're not smart--which is where the eight ball comes in. From lurking on my campus email, it seems the faculty's first response is simply to ban the platform and assume all it's good for is cheating. I suspect that will be the go-to opinion of the Left Voice crowd. First, I'm shocked that they think so poorly of their students. Yes, some of them will cheat, but most of them won't. Second, this is a fool's errand--you will never be able to ban GPT. And I don't know why you'd want to (except perhaps on a few specific assignments). GPT will change the way higher ed works in very dramatic ways. I suggest it will make online education much cheaper and more effective. The benefits of a residential college will decline in relative terms. And since GPT will radically change the workplace, it will perforce change what higher ed teaches.

I believe (and have believed for some time now) that educating students in STEM fields is not useful, and I deplore the huge funds that governments and philanthropies are investing in STEM education. Because computers can already do math better than you can. And now GPT can program computers better than you can. GPT can probably do science and engineering better than you can, and learn to design experimental apparatus or an organic synthesis faster and better than any human. Of course there will always be a need for very high-end scientists and engineers--but much less need for the more mediocre sorts educated at smaller public institutions like the one where I worked.

College algebra is a nearly useless subject. Calculus is even less useful. Yes, they're beautiful, and people who are interested in those disciplines for their own sake should be encouraged. But to require hundreds of students to take algebra and calculus because they're useful (when they're not) strikes me as not sensible. Calculus probably needs to go the way of Latin.

The careers of the future will not be in STEM. Instead they'll be in the arts and humanities. The scientist and the engineer were careers for the 20th Century. Today they're increasingly automated away. The careers of the 21st Century are the artist, musician, storyteller, preacher, counselor, sex worker, nurse, teacher*, comedian, chef, hotelier, etc., along with the all-important skilled trades. Those are where the jobs will be. And those are the jobs that cannot be done by any computer, much less GPT.

So how is higher ed in the catbird seat? That's because nobody really knows how to use GPT yet. How does one use the platform effectively? Ethically? How do you cite GPT sources? This is all very unclear, and it seems to me that the best people to work out some answers to those questions might be college faculty. Or, at least, should be if they weren't such ostriches about it. College faculty need to spend a lot of time using GPT, and need to give their students assignments using it, and play around with what "effective and ethical GPT use" actually means. There will certainly be some false starts, but for the most part I think it would be an adventure.


*By "teacher," I don't mean the college professor type who went to grad school and thinks she's smart enough to ban GPT. I mean something like the Bennington College model where the faculty are practicing artists who take a year-long sabbatical from their careers (art, music, theater, creative writing) to mentor young people who want to learn, only to go back to their careers after a year (or two). 

