---
title: "AI: the not-so-good parts"
date: 2024-01-16
tags:
- ai
- ethics
- philosophy
---
Hey, if you normally read the written form of my talks, I highly
suggest watching or listening to the video for this one. The topic I'm
covering is something I'm quite passionate about and I don't think
that my tone is conveyed in text the same way it is in voice. If the
version on XeDN doesn't load for you for whatever reason, please
[contact me](/contact/) with the output of
[cdn.xeiaso.net/cgi-cdn/wtf](https://cdn.xeiaso.net/cgi-cdn/wtf) and I
will figure out what is wrong so I can fix it.
You can find the YouTube version of this talk [here](https://youtu.be/EfAjITmLP50?feature=shared).
<XeblogVideo path="talks/2024/ai-ethics" />
---
<XeblogSlide name="2024/ai-ethics/001" essential />
Hi, I'm Xe Iaso and before we get started, I want to start by talking
about what this talk is and is not. This talk isn't going to be the
kind of high signal AI research that I'd really love to be giving
right now. This talk is about actions and consequences.
<XeblogSlide name="2024/ai-ethics/002" essential />
What impacts will our projects have on the real world where people
have to take objects like this and exchange them for food and shelter?
I'm sorry to say that this talk is going to be a bit of a wet blanket.
I'm so sorry for Yacine because all that stuff with local AI inference
in browsers was really cool. And that dogfooding of
[dingboard](https://dingboard.com/) for a presentation about how
dingboard works was cool as hell.
<XeblogSlide name="2024/ai-ethics/004" essential />
All the best things in life come with disclaimers, as I'm sure you
know, and these words are my own. I'm not speaking on behalf of my
employer, past employers, or if you're watching the recording and I've
changed employers, any future employers. I am speaking for myself, not
other people.
Before we get into this, let's cover my background: some stuff about
me, what I do, and how all this AI stuff has benefited and harmed me
personally. As Hai [the organizer of the AI meetup that asked me to
speak there] mentioned, I'm a somewhat avid blogger. I've only got
like 400 articles or something. I write for the love of writing, and
I've got maybe four 3D-printed save icons' worth of text available on
my blog for anyone to learn from, on topics ranging from programming
to spirituality, semiotics, and AI. My writing is loved by the
developer community and it's the reason why I get hired.
<XeblogSlide name="2024/ai-ethics/007" essential />
Regardless of anything I say in this talk, please make a blog,
document what you've learned, document what works, document what
fails, just get out there and write. You'll get good at it, just keep
at it. This is genuine advice.
<XeblogSlide name="2024/ai-ethics/008" essential />
However, as a reward for making my blog a high-quality thing, it's
part of the ChatGPT training dataset. Somewhere in some data center,
my blog's information is sitting there tokenized, waiting to get
massaged into floating point weights by unfeeling automatons used to
make unimaginable amounts of money that I will never see a penny of.
This is the punishment I get for pouring my heart, soul, and love into
my craft as a blogger.
I get turned into ChatGPT.
<XeblogSlide name="2024/ai-ethics/009" essential />
Now in our system of law, things are generally lawful unless there's
some law or precedent that says they're not. At the time of this talk,
we aren't sure if training AI models on copyrighted data is fair use
or not. The courts and lawmakers need to battle this out (if they'll
be allowed to, because there is a lot of money behind the AI industry
right now).
This is technology so new that it makes Bitcoin look like Stone Age,
8-bit computing, back when you couldn't count above 255 without major
hacks.
<XeblogSlide name="2024/ai-ethics/010" essential />
And mind you, I'm just one blogger. I'm just one person. I don't have
that big of a platform, all things considered. Sure, in the genre of
technology bloggers I'm probably fairly high up there, but I'm not
like front-page-of-the-New-York-Times big. I'm just a person who likes
talking about computers and how they should work. I'm just someone
that gazed into the void too much, and now people pay me to gaze into
the damn void.
<XeblogSlide name="2024/ai-ethics/011" essential />
So how do we understand all this?
How do we figure out how to peel back all the layers of terminology
bullshit that keep us from having a clear understanding of what people
are even saying?
If we take all the drama and interplay involved in our society, we can
boil it down to two basic things: actions and consequences. Actions
are the things that we do, and consequences are the things that
result.
So let's say you cut a tree down to make a fire, but that tree was
used by animals for shelter, and now those animals have a harder time
making it through the winter.
You take actions and something or someone else has to deal with the
consequences.
Most of the time our actions serve to make us better off and shield us
from the consequences. We see this happen with that tree that got cut
down. We will see this happen with ChatGPT, and we will keep seeing it
happen for time immemorial as history repeats itself.
As exciting as all of this AI technology is, as a science fiction
writer I can't help but apply the same actions-and-consequences
analysis to how we're using it today.
<XeblogSlide name="2024/ai-ethics/016" essential />
Now you can put your pitchforks down; I see you out there holding them
up. I'm not trying to be a contrarian or decry AI as wrongthink. I've
been using AI for my own stuff and I genuinely think that there are a
lot of really exciting things here.
I'm mostly worried about how the existing powers that be are going to
use this surplus of cheap labor, and how those actions will have
massive consequences for us all.
<XeblogSlide name="2024/ai-ethics/017" essential />
One of the things I'm trying to get across here is not all "Capitalism
bad! Let's bring back the bread lines, baby!" There are plenty of
places to see those arguments and I don't want this to be one of them.
I want to inspire you to see what the consequences of your actions
with AI could be, so that we can make the world a more equitable
place.
Of course, this is made even more fun by the concept of unforeseen
consequences or downstream consequences that you couldn't have
possibly seen coming when you were experimenting with things.
<XeblogSlide name="2024/ai-ethics/018" essential />
As an example, for a long time people thought all swans were white.
Swans became symbols of literary purity or something like that, and
the belief was so common that "black swan" became an English idiom for
an impossible thing.
As this photo proves, swans can be black.
And now the term "black swan event" describes something that seems
obvious in hindsight but that we couldn't possibly have foreseen at
the time.
(Begin sarcastic tone)
Just like that unmentionable-on-YouTube viral pandemic that happened a
few years ago that our society will never really recover from!
Scientists were warning us for years that we'd be totally screwed by a
viral pandemic but no, we didn't take them seriously.
(End sarcastic tone)
<XeblogSlide name="2024/ai-ethics/020" essential />
Whenever anyone takes actions, you can usually model the consequences
or impacts as falling on yourself, your friends, or the world at
large. I haven't found a good way to model the impact risk of a given
field very well, but I like triangles, so I made this triangle, the
impact triangle, to show what the factors in the computer science
industry are.
In terms of access, anybody can become good at coding and start
working at a company, or create a company to solve a problem that they
have in their lives. I'm pretty sure this basic fact, that the
computer industry is open to anybody, is why everybody in this room is
here today.
Personally, I'm a college dropout.
Without the industry allowing just about anyone to walk in the door
and start being successful, yeah, I'd still be in the Seattle area
probably working minimum wage at a fast food place. I wouldn't be able
to dream of immigrating to Canada and I probably would have never met
my husband who is so thankfully recording this for me.
There's also no professional certification or license required to
practice computer science or software development or whatever we call
ourselves now. Basically anybody off the street, without any
certification, can make an impact on a world scale if they get lucky.
And then in terms of limits, our industry measures results in small
units of time, like individual financial quarters. In aggregate, our
industry only cares about what we do to make the capitalism line go up
next quarter, and there are no ethical or professional guidelines that
prevent people from making bad things, or that even define what good
and bad are in the first place. In an ideal world, the thought is that
the market will sort everything out. Realistically, with the GDPR and
the like, there are some laws that force people to comply, but as long
as you have good lawyers, you can get away with murder.
Compared to most other professions in the job market, our industry
looks incredibly reckless. Accountants need to be licensed and pass
certifications. If you want to call yourself a surgeon, you need
surgical practice, you need a license in surgery, and you need to keep
up with the profession.
We don't have such barriers to entry.
As an example of this, consider Facebook. They have a billion users.
That's a one followed by nine zeros, a billion with a B as in bat.
When they made Facebook, the thought was that reducing social distance
would make everybody happier and help them live more fulfilled lives.
An unimaginable amount of photos, videos, and text posts are made to
Facebook every day. Some measurable fraction of these violate
Facebook's community guidelines at the very least and are outright
illegal at the worst. Many trivial cases can be handled by machine
learning algorithms, but there's always that bit that needs to be
judged by a human.
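To make the shape of that concrete, here's a minimal sketch of that
triage pattern. The names, thresholds, and toy heuristic are all made
up for illustration; this is not how Facebook actually does it:

```typescript
// A minimal sketch of the ML-plus-human triage pattern, with made-up
// names and thresholds. The classifier makes the easy calls at both
// extremes; everything it's unsure about lands in a human review queue.

type Verdict = "allow" | "remove" | "human_review";

interface Post {
  id: string;
  body: string;
}

// Stand-in for a real ML model: returns a fake probability that the
// post violates policy. A real system would call a trained classifier.
async function classifyViolation(post: Post): Promise<number> {
  return post.body.includes("buy followers") ? 0.99 : 0.01;
}

async function triage(post: Post): Promise<Verdict> {
  const p = await classifyViolation(post);
  if (p < 0.05) return "allow";   // confidently fine: publish it
  if (p > 0.95) return "remove";  // confidently bad: auto-remove it
  return "human_review";          // the part machines can't handle
}

// Example: this post is confidently bad, so it never reaches a human.
triage({ id: "1", body: "buy followers now!!" }).then(console.log);
```

The uncomfortable design question is where you set those thresholds,
because every post in the middle band is a post a human has to look
at.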
Speaking as a recovering IRC op, content moderation is impossible even
at small scales, and the level of impossibility only grows as the
number of people involved grows. I am fairly certain that it is
actually, entirely impossible to moderate Facebook at this point
because there are just too many people. You have to have some machine
algorithm in there at some point, and there are going to be things
that the algorithm can't handle.
So then you go and you use humans to rate that.
You contract it out to a company, which very wisely decides to
subcontract it out so they don't have to deal with the fallout, and
finally it ends up on the desks of people who are tortured day and
night by the things they're forced to witness to make rent.
For the action of creating Facebook and all of the systems that let
Mark Zuckerberg build a bunker in Hawaii, raise his own cattle, make
his own beer, and smoke those meats, he doesn't have to see the images
and things that the content moderators have to see.
He just lies back and watches his bank account number go up, and maybe
does CEO things if he has to.
The human cost is totally discounted from the equation because the
only limit is what makes the capitalism line go up. The people doing
the actions almost never see the consequences because the CEO of Uber
never got his job replaced by an Uber driver. The CEO of Google never
suffered the algorithm locking him out of his entire digital life for
good with no way to get it all back. And the people doing the actions
and making the decisions are not affected by any of the consequences,
foreseen or unforeseen.
The last time I spoke here, I spoke about a work of satire called
[Automuse](/videos/2023/ai-hackathon/). Automuse is a tool that uses
large language models and a good dose of stochastic randomness to
recreate the normal novel-writing process and make some amusing
outputs.
When I made it, I really just wanted to throw ink at the canvas to see
what would happen, then write [a satirical scientific
paper](https://cdn.xeiaso.net/file/christine-static/video/2023/ai-hackathon/automuse-2.pdf).
<XeblogSlide name="2024/ai-ethics/031" essential />
To my horror, I won the hackathon with a shitpost about the publishing
industry that was inspired by my fear of what could happen if things
like Automuse were more widespread.
When I gave my talk at the hackathon, I had a five minute slot and
there was something that I had on my script that I cut out as I was
speaking.
Not sure why I did, it just felt right at the time.
The part that I left out was inspired by this quote from the
philosopher SammyClassicSonicFan:
<XeblogSlide name="2024/ai-ethics/033" essential />
When will you **learn**?
When will you learn that your **actions** have **consequences**?
I made Automuse precisely because I understand how impractical such a
thing is. The output quality of Automuse will never compare to what a
human can write no matter what large language model you throw at it.
Okay, yes, I did my research; there's actually a rather large market
for low-quality pleasure reading that something like Automuse could
fill. There's a surprisingly large number of people who enjoy reading
formulaic stories about good winning out over evil, or older people
who read romance novels to feel the passion of being young again, or
whatever. Not to mention, doing something like that as a company would
give me an excellent moat, because most AI companies want to focus on
high-quality output, and here I'd be, the trash vendor moving in. I'd
basically be invincible.
But I don't know if I could live with myself if I turned Automuse
into a product.
When I made Automuse, I knew that this was a potentially high impact
thing, so I crippled it.
I made it difficult for anyone to use, even me.
I made it rely on a private NPM dependency hosted on a server that
only I have the API token for, and that dependency just so happens to
be the thing that generates random plots.
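For the curious, here's roughly what that kind of gate looks like in
the npm ecosystem. The scope, registry URL, and token variable below
are all made up for illustration, not the real ones:

```
# .npmrc: hypothetical scope, registry, and token variable
@plotgen:registry=https://npm.internal.example.com/
//npm.internal.example.com/:_authToken=${PRIVATE_NPM_TOKEN}
```

Anyone without a valid token for that registry can't even finish
`npm install`, so the project stays effectively unusable without my
cooperation.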
I also made it in a way that requires massive human intervention and
filtering in order to get decent results and every so often I get a
message from somebody that asks me:
<BlockQuote>Hey, how can I set up Automuse on my stuff?</BlockQuote>
And they're surprised when I quote them a five-figure number to get
them to go away. Some are even angry and curse me out because a person
making open source software on the internet dares to want to be paid
for their time.
I can't understand that, actually.
But above all, the reason why I really don't want to productize it or
make it available for mass consumption in any form is the problem of
book spam. Automuse would make the problem of book spam worse.
The book spam problem is where people upload nonsense to the Kindle
store and make boatloads of money doing it. This problem has been
accelerated by ChatGPT and is getting to the point where Amazon's book
vending thing actually had to implement rate limits for uploading
books.
I don't think I could live with myself if I made and released an easy
to use product that made that problem worse.
It's bad enough that whenever I get around to finishing my novel
Spellblade (I couldn't find the cover I commissioned, so I just put
the name on the slide), I'm almost certainly just going to release it
on itch.io or to my patrons for very cheap. In theory, the Kindle
store would be the best place for that kind of high signal original
fiction but I just don't want it to get flooded out in a wave of AI
generated mushroom foraging books.
I don't think that anyone at OpenAI anticipated that people would use
ChatGPT to make the book spam problem worse. I have a friend who works
there, and from what I've seen, the research side of OpenAI really has
its head screwed on right.
The problem is the capitalism side of OpenAI getting that sweet, sweet
return on investment by making a product that nobody else can provide
and then charging for the output.
<XeblogSlide name="2024/ai-ethics/039" essential />
Above all, the part that really confuses me is why we're automating
away art and writing instead of, like, snow blowing or something
actually useful. There's a part of me that's really concerned for the
future of our industry, and I think we need to be aware of this before
it bites us all. Getting rid of everybody who has aesthetic knowledge
really seems like a bad idea for an industry that focuses so much on
design.
<XeblogSlide name="2024/ai-ethics/040" essential />
With the Industrial Revolution came factories. Factories allowed us to
produce objects on scales like never before. Raw materials go in at
one end, human labor goes in the middle, finished products come out
the end. This has allowed us to become the kind of species we are
today. You can circumnavigate the globe in 100 hours while playing a
contrived game show about travel. You can head to an entirely
different continent in, what, 12 hours? This has led to discoveries
that have made us healthier and longer-lived; overall it's been a boon
for the human race.
<XeblogSlide name="2024/ai-ethics/041" essential />
However, this is a modern assembly line for cars. Look at what you
don't see here: people. All of those robot arms and the like represent
jobs that were once done by humans: operating the crane to lower the
truck body onto the chassis, all of that stuff. With every new model
year there's more automation at play and less room for human jobs.
Sure, we can make more cars per hour, but every job that's not done by
a human is another family that can't make rent. It's another child
that can't grow up to, you know, actually cure cancer or something.
And I just feel like it's another way for the ownership class to
scrape more off the top.
With that in mind, I want you to consider this:
<XeblogSlide name="2024/ai-ethics/042" essential />
These are our factories: the open office environment. Instead of wool
or wood or water as input, we have user stories, electricity and
coffee. Many of the companies out there are really just assembly lines
for code features or Kubernetes configurations. I think the ultimate
dream of this lies in the idea of the T-shaped developer that I've
seen many management people talk about when they're trying to
reorganize their companies.
<XeblogSlide name="2024/ai-ethics/043" essential />
The core idea of the T-shaped developer is that you have really good
competency in one field and enough broad knowledge in other fields
that you can basically be put anywhere in a project and be useful.
This is why you see things like ephemeral teams or decrees from on
high that thou must write in JavaScript for all things.
And in theory, it makes it a lot easier to move people around and
place them wherever the company needs them, making the process more
adaptable to circumstances. Not to mention, if everyone's just a
T-shaped developer, it's really easy to get people off the street and
into the job in days, so you don't have to spend months training them
on how you messed up Jenkins this time.
Ever notice that every job opportunity is only for senior roles?
This is why.
Usually by the time you convince companies to give you a title that
starts with the word "Senior", you've already been molded into a
T-shaped engineer and you can slot in just about anywhere.
This is our assembly line, created in the fear that if we don't do
this, the line will trend the wrong way and investors won't give us as
much money as freely.
Like, okay, I realize I'm doing some doom and gloom stuff here.
It's probably going to be a while until AI is actually able to replace
our jobs. Right now, there isn't a magic button that product teams can
use to "just implement that feature" based on a textual description.
That's probably a long way off, and it'll probably require a different
fundamental architecture than attention-window transformer models.
But with that in mind, there's a segment of people that already have
the magic "just implement it" button today:
Artists.
Stable Diffusion, Midjourney, and DALL-E 3 have gotten to the point
where the output is not just good.
It's good enough.
As long as there's nothing obviously wrong with the hands, the vast
majority of people won't be able to tell that an image is AI
generated.
However, artists can tell instantly when you have an AI generated
illustration.
<XeblogSlide name="2024/ai-ethics/049" essential />
Just look at this one I used earlier in this talk. It's so bad. Look
at the stem on that flower. That is not how stems work. The brush at
the bottom is blending into the easel in ways that physically separate
objects don't. The lighting on the flower that the robot is holding is
inconsistent; it looks like the light is coming from the front and the
back at the same time. The antennae are melting into the shoulders of
the robot.
It's totally passable at first glance.
I'm pretty sure that before I mentioned all that stuff and put all the
arrows on the slide, you wouldn't have seen any of it. But when you
start critically analyzing it, it just falls to pieces.
I guess the better question here is why would you want to use an AI
generated image for something?
One of the big places you want to use an AI image is for the cover
image on your blog post because we've come to expect that blog posts
need cover images for some reason.
There's more desire for cheap filler art that meets certain criteria
than there are artists willing to work for unrealistically low prices
with incredibly quick turnaround times. Art is everywhere, and yet
it's been commoditized so much that it's worthless, in a day and age
where rent and food prices keep going up.
So we end up with something like this:
<XeblogSlide name="2024/ai-ethics/051" essential />
You get an AI-generated assembly line of robots painting flowers.
This is really why I didn't want to develop Automuse into a company. I
just fear that action would have too many consequences and my friends
and fellow artists would suffer. This is why I did so much detailed
math about how much it would cost per word, how the quality would be
seen in the market, and what impact such a technology would have if it
churned out hundreds of books per hour.
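To give a flavor of that math, here's a back-of-the-envelope sketch.
Every number in it is a purely illustrative assumption; real token
prices and tokens-per-word ratios vary by model and change constantly:

```typescript
// Back-of-the-envelope cost-per-word math. Every number here is an
// illustrative assumption, not a real quote from any provider.

const tokensPerWord = 1.3; // rough average for English text (assumption)
const dollarsPer1kTokens = 0.002; // hypothetical output-token price
const novelWords = 80_000; // a typical genre novel length

const costPerWord = tokensPerWord * (dollarsPer1kTokens / 1000);
const costPerNovel = costPerWord * novelWords;

console.log(`cost per word: ~$${costPerWord.toFixed(7)}`); // ~$0.0000026
console.log(`cost per novel: ~$${costPerNovel.toFixed(2)}`); // ~$0.21
```

Even if those assumed numbers are off by an order of magnitude,
pennies per novel is a very different regime from paying a human
author.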
Outside of the systems we live in, yeah, this AI stuff is great. It's
fantastic tech that allows us to do any number of things we couldn't
do before.
But inside the systems we live in, I can't help but see this as yet
another way that human labor is being displaced without a good
replacement.
And we wonder why we can't call ourselves engineers in Ontario. Do we
really engineer anything or are we just making the line go up?
When will we learn that our actions have consequences?
Until then I guess we need to prepare for unforeseen consequences.
Thank you all for watching this and I hope it gives you some things to
think about. I hope I didn't break too many taboos about the industry
in the process but who am I kidding? I just broke all of them.
<XeblogSlide name="2024/ai-ethics/061" essential />
Thanks to everyone on this list for inspiring me to take action and
pushing me towards the presentation I gave tonight. Special thanks to
Mystes and Layl for really grinding hard on this, ripping it in half,
and telling me where I'm full of shit. Extra special thanks to my
husband for recording this for me, and thank you for watching.
<XeblogSlide name="2024/ai-ethics/062" essential />
I recognize that this is a really heavy talk. It'll probably take you
some time to surface good questions about it, but if you happen to
have them right now, please feel free to ask; I will be happy to
answer. If it takes you a while to come up with one, just email
[unforeseenconsequences@xeserv.us](mailto:unforeseenconsequences@xeserv.us).
It'll get to my inbox and I promise you I will reply. Have a good
evening, and does anyone have any questions?
## Q&A
<BlockQuote>What was the sigil you displayed at the beginning of your
talk?</BlockQuote>
That was the sigil of Baphomet, one of the names for Satan as
celebrated in Satanism.
<BlockQuote>Do you see a future where AI technology can equitably help
humanity thrive?</BlockQuote>
I do see a future where it can be used to benefit us all. The problem
is the intersection of what could be, what is, and the tools in
between; that's where you get the really interesting stuff, and there
are probably at least five good sci-fi novels you could write about
it.
You could write a really compelling one about just what happened with
OpenAI and especially what's happened with the e/acc people. I wrote
the plot outline for a bad science fiction novel about the madness
that is e/acc.
<BlockQuote>What do you think we should do about this
problem?</BlockQuote>
Just be aware that your actions don't exist in a vacuum.
If you build something that could replace jobs, then you need to be
cognizant of the people you're going to make unable to pay rent,
because if you make something that replaces knowledge work, you price
those workers out of being able to eat. And when people can't afford
to eat, they especially can't afford to retrain themselves for another
industry that hasn't been taken over by infinite cheap labor.
<BlockQuote>
First, thank you very much for the presentation. I'm not debating
here; I'm very open to these types of discussions. But you showed the
Industrial Revolution, and the next slide was all the people who were
impoverished. I don't see it as a linear change, though: the
Industrial Revolution, with all those workers working in those
dangerous situations, was not necessarily a better situation than
those workers being replaced by robots. As we moved on, we never had
any occasion where we needed to get rid of a bunch of the population
because we didn't have jobs for them; we eventually came up with
solutions, new jobs, some sort of a solution. So the main question is:
how do you see that change, exactly, from that industrial revolution
to this one?
</BlockQuote>
At some level, this stuff is going to happen regardless, and if it's
going to happen, there should be some societal support mechanism, like
universal basic income (which, no matter how many studies are run to
prove it doesn't work, actually does work), to replace the income
we're losing to machines taking over jobs that were previously done by
humans. Something like universal basic income would probably help a
lot here, but I don't know.
I don't have any solutions.
I'm more trying to blow the whistle that there's a problem before it
gets bad enough that things become irreparable.
<BlockQuote>
All right, I'd like to commend you first on your courage to do this.
It's obviously difficult to come into a room and say the opposite. At
the same time, I'll give you the opposite and the pit that was out of
the pit. You know, one of the things that, to act your way a little
bit, automation is known to increase the standard of living. So we
have all great things we can do because of automation. So AI is
automation's superpower. Now to say there's no consequences of AI
being abused, there definitely will be, but looking at the greater
impact of it all, and I think that's the reason we're all here, is
because we know that they're [unintelligible], but truly deep down, we
know that bringing abundance to the world is far greater and needs to
be substantial in that event.
</BlockQuote>
I mean, yes, congratulations. You actually got the point of the talk.
The point of the talk is to get you to think critically about what
these tools are, what's going on, and what the benefits could be as
well as what the downsides could be. I just don't know if our current
system of distributing wealth and resources i