Midnight Pub
Mechanization, Step 1
~zampano
(On becoming slightly more machine, unequally.)
I was recently diagnosed with sleep apnea, which would certainly explain why I’ve been tired every morning for the last…long time. For a while I assumed it was a (then-new) antidepressant, but apparently that’s not it, or at least not the *whole* it.
I began treatment a couple nights ago, which consists of connecting myself to a fancy air pump to make sure my lungs get a steady stream of air overnight so that my whole system doesn’t go into panic mode. I’m not used to it yet, but I definitely noticed that my feelings of sleepiness since I started were qualitatively different from how I felt before.
It’s been an adjustment in other ways. This feels like the first step over an event horizon into being “old.” Sleep apnea is a condition that I certainly associate with older people, and the little information I’ve seen suggests that the risk increases as you age. It’s also more common in men, so I have that going for me.
Even since hitting 40, I’ve been at most vaguely conscious of my increasing age. I start sweating *much* faster than I did even a couple of years ago, and while I know my muscles don’t recover from serious exercise as fast as they used to, it doesn’t feel like a significant change (it was doubtless more gradual, too). Beyond that I’ve been very fortunate; my physical health has remained quite good, with nothing more serious than the odd infection we all get. While sleep apnea isn’t especially serious (even if it can cause plenty of long-term effects that are), there’s definitely a sense of everything being downhill from here.
I’m also fortunate in being able to get it treated. I don’t know what this would cost if I were paying out of pocket, but I imagine it’d be significant: beyond the initial diagnosis, management via CPAP is typically lifelong and requires regularly replacing various bits. I have insurance, but because I live in the United States, that’s not really guaranteed. So I can’t help but see this as another example of technology doing more for those of us already in a better position. Even if I’m hardly “wealthy” (depending on the standard), my family is quite comfortable, and as a civil servant my employment is about as secure as it gets.
This is why I think that many of the fears being raised about large language models (LLMs) like ChatGPT are misplaced at best, if not outright self-serving. For the latter, one need only look at that open letter that made the news a few months ago, signed by all kinds of tech “luminaries,” most of whom have their own “AI” tech in the pipeline (more about the scare quotes in a moment) and so are hoping to use regulations written by a woefully ignorant Congress to both slow down OpenAI and wall off the broader technology from any latecomers. They have deep enough pockets to weather whatever paltry regulations governments can actually put in place, but that’s unlikely to be the case for an up-and-comer without a whole lot of outside funding. Moreover, given that the Age of Free Money seems to be more or less over, at least for now, it’ll be interesting to see just how many start-ups can lose money for years and keep getting new funding. The Netflix approach (lose money while basically creating a new business type, then jack up prices once you’re entrenched) doesn’t seem consistently effective, as there’s always a bigger fish who can eat the cost much more easily.
That said, I do think there’s a danger in going too fast. But it’s not a case of having insufficient regulations so much as a case of rushing to adopt a technology that isn’t actually fit for purpose. We’ve already begun to see instances of ChatGPT and similar models “hallucinating,” which is to say “making things up.” The recent high-profile case of a New York law firm using ChatGPT to write a legal brief (and the sanctions that resulted) is a quasi-example of this. (“Quasi-” because the lawyers’ pleas of ignorance were not deemed credible by the presiding judge.) The question of funding I mentioned above also crops up here, as chasing all those VC dollars creates a huge incentive to over-hype whatever it is you’re doing and to lean on buzzwords that venture capitalists don’t seem to have figured out yet, even if the rest of us have. (Looking at you, blockchain.) See also: the wildly over-stated claims of LLMs’ actual capabilities.
Unlike 20+ years ago, there seems to be a big push in the business world (and thus in government and other fields that shouldn’t be businesses but are) to be the first to adopt a new technology, if for no other reason than the marketing that can be done with it. This becomes increasingly dangerous as the technology involved becomes less understandable (even to those creating it) while simultaneously being given more responsibility. It’s virtually impossible to debug a statistical model (which is what these learning models or whatever actually are; they don’t think) that is entirely opaque. How do you figure out which factors are being weighed, and by how much, when there are millions of them? My favorite example of this was a story I heard on the radio a while ago, about researchers training one of these programs to diagnose lung cancer. While it was often accurate, they also discovered that the program was reading the name of the facility printed on the x-rays it was viewing. Its statistical model weighed *everything*, including the fact that (in this case) someone in a hospital was more likely to have (or get) a lung cancer diagnosis. Thus the very fact of being *in* the hospital was itself a factor that made the software more likely to “find” cancer in a given image.
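(For the technically inclined, here’s a toy sketch of that failure mode, sometimes called “shortcut learning.” Everything in it is invented for illustration; it’s not the actual study from the radio story. It just shows a simple model latching onto a spurious feature that happens to track the label, then falling apart once that correlation breaks.)

```python
# Toy sketch only -- invented data, not the real study. A linear model is
# trained on two features: a weak "real" diagnostic signal, and a spurious
# one (which hospital the scan came from) that happens to track the label
# in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

signal = rng.normal(size=n)                              # weak genuine signal
label = (signal + rng.normal(0, 2, size=n) > 0).astype(int)

# In the training data, sick patients were mostly scanned at one hospital,
# so "hospital" agrees with the label 90% of the time.
hospital = np.where(rng.random(n) < 0.9, label, 1 - label)

model = LogisticRegression().fit(np.column_stack([signal, hospital]), label)
print("learned weights (signal, hospital):", model.coef_[0])  # hospital dominates

# Deployment: which hospital you're in no longer correlates with the diagnosis.
hospital_new = rng.integers(0, 2, size=n)
print("accuracy with the shortcut intact:",
      model.score(np.column_stack([signal, hospital]), label))
print("accuracy once the shortcut breaks:",
      model.score(np.column_stack([signal, hospital_new]), label))
```

Here, at least, the shortcut is right there in one of two learned weights. With millions of them, good luck.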
In this case, of course, the flaw was found. But as these models become more complex (to say nothing of proprietary), how long before that complexity exceeds our ability to meaningfully evaluate their effectiveness? Some error can be acceptable, especially if it’s less than whatever it’s replacing, but that assumes we can actually figure out the error rate. Meanwhile, our legal and regulatory systems, at least in the West, are built entirely on proving that something went wrong that shouldn’t have. How do you do that with a black box? How do you hold a company accountable for an error that was unforeseeable? (At least in the U.S., foreseeability is a major element of negligence.)
There’s also a tendency, at least among some “entrepreneurs” (or however they’re styling themselves these days), to assume that “new” is automatically *better*. This was my big takeaway from the recent implosion of the private submersible *Titan* on its way to view the wreck of the *Titanic*. Since the sub’s disappearance, more and more stories have surfaced of the company (and its CEO) ignoring safety warnings, both internal and external. It seems in many ways to be a case of Dunning-Kruger: the CEO apparently didn’t know enough to understand why, for example, using carbon fiber in the hull of a deep-diving submersible is a bad idea. The simple explanation, as I understand it, is that carbon fiber doesn’t deform the way steel does as it comes under too much stress; it simply shatters. This is also why the safety system the company relied on so heavily, acoustic sensors that were supposed to detect the beginnings of hull damage, was not enough. OceanGate (the company involved) seems to have assumed that the beginnings of any failure would be detected in time to do something about it. This was clearly wrong.
The hull OceanGate designed was new in a sense (or at least different), but there were very good reasons no one else was doing things that way, reasons the company would’ve discovered if they’d tested it properly rather than simply hand-waving such concerns away. For example, the CEO said the reason they never had *Titan* certified by an outside agency was that their technology was simply *so* revolutionary that it would take years just to get the agency up to speed on how it worked. Whether he believed this or not, OceanGate’s CEO (and four other people) would pay for these mistakes with their lives.
We also can’t forget that people have an inherent compulsion to anthropomorphize things. On some level we *want* the things we use to seem like they’re actually in conversation with us. You can go back thousands of years and find epigrams painted on clay jars, written in the first person as if by the object itself.
For the most part, I’m all for any new technology that can make people happier. If someone’s super isolated but can get that connection from a chat bot or whatever, so be it (and I’m not qualified to really evaluate any side-effects of this, which I imagine are highly situational anyway). My fear is that this tendency to humanize the tech we use will blind us to its inadequacies. This is of course what the companies hyping this stuff are hoping for. (Ted Chiang’s magnificent novella *The Lifecycle of Software Objects* gives a brilliant but different example of how this could work and some of the implications that we’re not really ready for.)
---
Ultimately it’s difficult to know *exactly* how ChatGPT and its ilk will do the things new tech always does (entrench most of the existing players, shift around a couple of others, screw everyone else). I just know that I’ll be very much in Luddite mode when it comes to anything “AI”-related for the foreseeable future.
Thankfully, the tech I use to sleep well isn’t reliant on any of that. Granted, it does rat me out to my insurance company (I have to prove I’m actually using the thing for them to pay for it, which as insurance-company bullshit goes isn’t too bad). It does make me think about what the more stereotypical “sci-fi” tech would, or will, actually look like. Sure, you can have a bionic arm, but only the manufacturer will be able to repair it, and it’ll record everything you do and everywhere you go. And there’ll still be plenty of limbless people around.
ns
I'm a little anxious boy merely growing into his global consciousness and existential anxiety. My mental state had already been somewhat precarious during the whirlwind time when DALL-E 2 and ChatGPT rose to prominence, and I subsequently lost many hours of sleep to catastrophizing.
Now that the dust is starting to settle and I've familiarized myself with the capabilities of the newest technological toys, even going so far as to use them daily and integrate them into my workflow, I sleep better. Part of it is realizing that they're still a long way from huge societal disruption. Another part is realizing that even when things change, we adapt.
I'm probably not going to enter Luddite mode over these new-fangled technologies. But mostly I hope to avoid becoming the next evolution of the Satanic Panic, worried for the future of a society with sinful pleasures like Dungeons & Dragons.
zampano
You raise a good point in that we frequently conflate "new" and "worse" (or "dangerous"), a tendency that seems to get stronger the older we get. The devil you know and all that.
I don't plan to go full Luddite either, most likely. But (and hopefully this was clear from my post) I do plan to be skeptical about the promised capabilities of these things. I would also include in that any attempt to shoehorn an LLM into something that really doesn't need one. There's also the danger of monopoly. All of this applies to pretty much any new technology, and as you say, we've managed to keep stumbling along despite ourselves.
"Disruption" has certainly become a buzzword these days, but here too it's typically over-hyped. Unfortunately, when there is disruption, it's usually the weakest among us who are most harmed by it.