Midnight Pub

Some Brief Examples of the Unthinking Adoption of Technology (and a Solution?)

~immy

Some Brief Examples of the Unthinking Adoption of Technology

In his book Digital Minimalism, Cal Newport discusses the necessity of being intentional when adopting new technologies. He makes the critique that, as a society and as individuals, we adopt new technologies merely because we can, often without making a risk/reward assessment. As a result, we often find ourselves stuck with the downsides of a particular technology after it has already been widely adopted. In this article I want to discuss some examples and provide an alternative to uncritical technological adoption.

Exemplum Primum: Social Media

When the world was first introduced to social media, we were promised that it would usher in a new age of easy communication and interconnectedness. While there are certainly positive elements of social media, what we got was quite the opposite of what we were promised: social media has led us into an age of feeling increasingly disconnected from one another, not to mention the mental health effects and potential for addiction associated with it.

Had we spent more time considering our mass adoption of social media, we might have avoided this outcome. As a society we could have allowed social media to stay niche until we understood it better. Or, at the very least, some introspection might have informed us that being hyperconnected 24/7 is a bad idea.

Certainly the founders of social media sites knew that their websites were bad news from the very beginning. Mark Zuckerberg, for instance, has gloated about how "stupid" people are for surrendering their personal information to him. There are also a number of tech CEOs who won't let their children use smart technology. If they understood the harms of social media from the beginning, we could have too.

Exemplum Secundum: The Smartphone

The smartphone is another example of a device that we have all too uncritically adopted. Admittedly, the genesis of the smartphone was innocent enough: we were promised a device that would integrate the features of a PDA, a phone, and an MP3 player into one. It wasn't until later that all of the social media features smartphones are known for today were added.

However, we still could have asked, "how much power on the go is too much power?" Is a Nokia brick too much? Is a PDA too much? Is an iPod too much? Is a smartphone too much? Nobody was concerned enough to ask, and now we're lumbered with the expectation that everybody carry around a device that is too powerful for the human mind to deal with unharmed.

What started as the promise of a device that would make our lives easier and keep us entertained and connected on the go has become a public health hazard that has made us unhappy, anti-social, unable to maintain focus, and addicted to our phones.

Exemplum Tertium: AI

AI is an interesting example of the uncritical adoption of technology for two reasons. The first is that it's a current example: we're only just beginning to adopt AI. The second is that it shows we've learnt our lesson: people are beginning to push back against AI. They see through the promise that AI will make our lives easier and result in humans having to do less work.

As with smartphones and social media, the real outcome of AI will be the exact opposite of what we're being sold. Though AI might make our lives easier in some superficial sense, the reality is that it will result in people losing jobs and the further homogenization of art (as eventually human artists will become "obsolete"). Not to mention that AI has the potential to make us lazier and more reliant on technology.

The Alternative/The End

Hopefully this article makes clear the necessity for critical thinking when adopting new technologies. To be clear, my intention isn't to push people towards Luddism; a world governed by such a philosophy would only cause more problems than it solves, as some technologies truly are unambiguously good (or bring about more good than harm). What I'm suggesting is that when you join a new website, purchase a new device, or consider using a new technology, you should ask yourself, "what value does this add to my life, and what potential for harm exists here?" If more people had done this, we might not be living through a catastrophe of smart devices and social media. If we start thinking critically now, we might avoid the impending AI disaster.


tracker

Thanks for getting this conversation going, ~immy. Well said.

Like some other folks chiming in, I don't own a traditional smartphone, nor do I have any social media accounts. While I don't think I've been around as long as ~ew, perhaps my resistance to these technologies is related to my age.

I received my first cell phone (a feature phone, or dumb phone as they are now called) when I got my first job after graduating from college. I didn't like it, but it was a requirement for work. I remember having a conversation with an older fellow in my office who really, really did not like cell phones, and I remember wondering if he was right or just blowing their potential harms out of proportion.

All of this was, of course, many years before Apple released the iPhone and Facebook appeared and started to entice everyone I knew into its shiny new gated community. At the time, I was in grad school and remember being quite busy with classes, clubs, relationships, hobbies, and so on. I honestly couldn't care less that someone had posted a picture of what they ate for breakfast that morning on Facebook or any other social media site. It all seemed like narcissistic navel-gazing to me.

Fast forward to the present, and I'm living in an off-grid home that I built in the forested mountains of Vermont. I power my home and its conservatively designed systems with solar electricity and hand-harvested firewood from my land. I connect to the internet via satellite dishes powered by my solar panels. I use a Lightphone II and a VOIP phone for calls and texts, and I still don't have any social media accounts (unless, of course, we want to consider the Midnight Pub to be one).

My career is in environmental software development, so I do still spend an awful lot of time in front of my computer, but I at least spend the bulk of my days just looking at text, since my environment is full-screen, multi-monitor Emacs (EXWM FTW!). I appreciate many capsules and gopherholes on Gemini and Gopher for their simplicity and lack of addictiveness, while still often being interesting and informative. I even authored my own Gemini server several years back, which supports server-side scripting. I use it at work to host my software team's internal wiki, to have one less thing that requires a web browser during my workday.

And lastly, when it comes to the current surge in interest around AI, I am certainly impressed with how quickly the chatbots have improved the quality and diversity of their outputs. However, I don't really use this tech in any meaningful way. For now, I'm keeping an eye on it in the background to see what wild ideas people will try to apply it to next, but I haven't yet elevated it in my mind to a risk worthy of more concern. That may, of course, change in the future, but for now, I'm going to just keep on working as I always have.

Vive la résistance!

reply

plasmon

I think that there's a certain tragedy to how the technology boom/bust cycle works, really—there are legitimately cool ways to use AI, but it requires AI to suck at its job (like that AI-generated video of Will Smith eating spaghetti, or This Cat Does Not Exist). Smartphones are incredibly interesting devices, but thanks to companies like Apple being trendsetters, they've become massive, horrible slabs that barely fit in my pocket.

Joseph Weizenbaum had the right idea about computers being fundamentally conservative devices. Sure, computers can solve a host of problems really well, but they forestall the need for new social inventions. Why attempt to find a better way to organize, say, a schedule when a computer can just solve an n-constraint Lagrange multiplier problem? Moreover, I've noticed that, at least within my own field of physics, there are a lot of people who like to fall back on sophisticated first-principles calculations rather than doing any real theoretical work; some people think that density functional theory calculations make the act of theorization secondary to plugging in the coordinates of the atoms in a system. And that type of thinking, built on the idea that the computer is superior for all problems, or eliminates the need for human work, is what gets a co-founder of Sun Microsystems to advocate for a "modern thinking" degree in place of English, History, and the various other fields in the humanities.

We need technology that works *for* us, rather than against us.

reply

detritus

I have had a related thought: in recent years the exponential rate of technological development has caused a situation that wasn't quite present at any point in the past. Think of the printing press. It was developed at a time when writing had existed for thousands of years already, and books had existed for a good many centuries, too. The press brought about a revolution in the distribution of knowledge, but it would take centuries before new (modern) publishing technologies made their appearance.

In the present, it is as if we developed writing, books, the printing press, and ereaders/epubs all within the span of 20 years. Mankind has not had the long spans of time to accommodate to changing technologies in a "natural" way; that is, they don't go through the periods of development that bring them to maturity. Instead, we hastily jump from one development to the next: before we learn to use a technology optimally, we are already adopting the next one.

Case in point is the development of storage and computing devices. There's this vertiginous obsolescence cycle where you have to continually update your devices in order to keep up with the demands of the world. We went from 5.25-inch floppies to 3.5-inch floppies, to CDs, and apparently we've reached optimum size with USB sticks. Accordingly, our laptops have evolved to accommodate each of these. Every 5 or so years a new kind of standard supersedes the last one: VGA to HDMI, micro-USB to USB-C, etc.

Software is one of the biggest culprits of this. While I can run a decent Linux system on a 15-year-old potato laptop, for the most part people are expected to have the latest processor to endure the demands of increasingly complex operating systems which offer little more functionality than they did 20 years ago. Don't get me started on mobiles. Not only are they monolithic black boxes, but even a 5-year-old mobile device won't run the latest version of MOST APPS, because continuous software upgrades stop supporting old devices -- just for the sake of keeping the market active.

Interestingly, as I said, there is a proliferation of solutions that do not actually provide an increase in functionality over decades-old solutions. Take Obsidian, for example: a note-taking app that uses a stack of technologies that build up in complexity and cruft, and which does nothing you couldn't do with sed and awk. Of course, there is the benefit of being easy for end users to start using quickly, but does that really justify all the computing power and a line count somewhere in the millions? The essence of its functionality was settled some 50 years back: text files and a file tree. The same applies to operating systems. Maybe this is the period of early development before we reach some kind of maturity; I like to think that at some point we'll go back to some semblance of simplicity or versatility. Right now we have the equivalent of fat, unwieldy swiss-army knives where we could be using actual knives, scissors, saws, and corkscrews which do their jobs a lot better.
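
To make that concrete, here's a rough sketch (the "notes/" directory, the [[wikilink]] convention, and the script name are made-up assumptions for the example, not anything Obsidian actually requires): a plain file tree plus a few lines of Python already gives you full-text search and backlinks.

```python
#!/usr/bin/env python3
# Rough sketch only: plain-text notes in a directory tree, searched with a
# few lines of Python. The "notes/" folder and the [[wikilink]] convention
# are assumptions made up for this example, not anything Obsidian-specific.
import re
import sys
from pathlib import Path

NOTES_DIR = Path("notes")                 # a plain file tree of *.txt notes
LINK_RE = re.compile(r"\[\[(.+?)\]\]")    # [[other-note]] style links

def search(term: str) -> None:
    """Print every note line containing the search term (case-insensitive)."""
    for note in sorted(NOTES_DIR.rglob("*.txt")):
        for lineno, line in enumerate(note.read_text().splitlines(), 1):
            if term.lower() in line.lower():
                print(f"{note}:{lineno}: {line.strip()}")

def backlinks(target: str) -> None:
    """Print every note that links to `target` via [[target]]."""
    for note in sorted(NOTES_DIR.rglob("*.txt")):
        if target in LINK_RE.findall(note.read_text()):
            print(note)

if __name__ == "__main__":
    cmd, arg = sys.argv[1], sys.argv[2]
    search(arg) if cmd == "search" else backlinks(arg)
```

Run it as something like `python3 notes.py search coffee` or `python3 notes.py backlinks gardening`; the point is that the data stays plain text that any future tool (or sed and awk) can read.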

I am sure that in periods such as the industrial revolution there was a great proliferation of new techniques and technologies within a short span, and we may be right in the middle of one such period, so perhaps in a couple of decades the whirlwind will settle for a bit and we will be left with the lessons of this rapid development and a basis for solid further development of whatever technologies end up staying.

I don't give a flying F about AI, to be honest. I hardly see any value in it. Sure, as a PA it is probably a charm, but not in its current state, when it still makes a lot of shit up and doesn't yet have the ability to follow a train of thought or carry an argument. At present it's mostly a shortcut to writing essays or drawing pictures, all of it devoid of style (or poorly emulating other styles, always leaving its eerily synthetic imprint). I hold fast to the idea that AI is heavily overhyped, and in a few years hopefully it will be seen, beyond the novelty, for what it really is: a natural language processor.

In conclusion, I think in about 20 or 30 years we may find our technologies reaching a sort of "metastable" configuration, reaching, as it were, their final form, in which hopefully everything will find its place. AI may become a smarter Clippy; well, hopefully it'll actually be useful at all.

reply

tffb

Wow, a well-written and considered response to this post. I enjoy reading the responses here. I agree, too, that a plateau of online (or online-curated) knowledge will arrive after some time, and then it's all a discovery game. We are likely already there. Then it isn't WHAT is there, or HOW to find it, but just individuals deciding that they are INTERESTED in WHAT'S there and HOW they get it.

reply

ew

Hello,

I never got the hang of "social media" after a short appearance on nntp in the 90s. I tried to make a smartphone a useful tool and failed, twice! And I despise the AI hype.

That being said, what is it that apparently makes me somewhat resistant? I really do not know. Having been raised without a television might help. Having been very short on cash until my mid-thirties has left a permanent mark on me --- not many movies, no ipod-like thing, no computer until I started my working life. I cannot understand why everyone and their dog are wearing ear pods for hours every day. Give your brain a rest, please! Having a university education in science might help to stay rather skeptical. I have not developed much FOMO (fear of missing out) --- because no one can experience a substantial fraction of what is going on on this planet in parallel at any time. Scientific? :) No, I was not raised in an Amish or similar context. Maybe I have had too many courses in philosophy, who knows.

But life without the things mentioned above is clearly possible. Otherwise I would have been dead long ago.

~bartender? Just coffee for now. You mind if I open the window to listen to the birds and the wind for a bit? Thank you!

reply

tffb

I see what you're saying here, and I agree - conscientious and deliberate use of technology, be it prolonged use or a one-time use case, *should* cross one's mind. Not as solid law or a moral good/bad, but it is simply a good idea to know WHAT one is doing before jumping in and DOING it, taking into account the consequential outcomes both online and "irl", even though people think "what happens on the Internet, stays on the Internet" (the Internet is now just as much a part of our "irl" day-to-day as gas stations or grocery stores).

In regard to impending AI disaster(s) (I am sure they will be frequent and innumerable, like those of combustion engine use (yea, some build great machines, but planes also fall from the sky sometimes)), I truly believe that whatever happens, or starts to happen, whatever course the tech/AI circuit/innovations bring us, will likely be unpleasantly disrupted by changes in weather events.

Nothing in particular to look forward to, in any case, but welcome to Planet Earth, 2024 years tired :D

reply

inquiry

There is a solution: people becoming better than selfish fools.

Why?

Because selfish fools are magically good at one thing: rushing in to circumvent foolproofness.

reply

starbreaker

I'd rather be wise and selfish than a selfish fool. But don't expect me to give up selfishness.

reply

inquiry

Probably different strokes, but my selfishness has been a precursor to malady often enough that I can't help but see it as a sort of "pride cometh before a fall" grade of blindness that gives foolishness a run for its money.

reply

starbreaker

When my selfishness has caused suffering, it's because I was not only selfish but short-sighted and thoughtless. One can act in an altruistic manner for perfectly selfish reasons, simply by believing that doing so will eventually redound to one's own benefit.

reply