Frustration over incompatible worldviews
Sep. 8th, 2017 07:56 pm

I have been to a conference on existential risk to humanity today and yesterday. Basically I went because someone at my department was the organizer, and I was curious about the subject--it was aimed at a fairly popular (or at least multi-disciplinary) audience.
It was interesting but also super frustrating. This is best illustrated by a conversation I had with a leading AI researcher at the conference dinner. He was doing research on the risk to humanity from intelligent AI. I said that I don't know much about that, but if he thinks there is a risk, I approve of him doing research on that risk and trying to lessen it. I then talked about things that I (from my perspective, having been engaged in the environmental movement for 15 years) think are risks to humanity, such as climate change, loss of biodiversity, degradation of soils, overpopulation and overconsumption, and so on. He dismissed all of these and said that technology would definitely solve these minor issues and I did not need to worry. He also said that since there existed a scientific solution to climate change (stop emitting CO2), the problem was now trivial and he did not care about political stuff.
This is like telling a starving person that you have theoretically solved their issue (they need to eat) and the fact that they do not actually have any food is a trivial problem!!!
AAAAAAARRRRRRGH. I showed humility by acknowledging his potential risk, since it is not my subject. Did he show any humility by saying "well, you probably know more about environmental issues than I do, maybe your risks might be worth taking seriously as well, just in case"? NO HE DID NOT. I found myself in the curious position of having much more in common with the economist from the business school who was on my other side--but then, he was actually working on environmental economics.
One of the speakers talked about how some risks were "sexy" (AI, aliens, etc.) and got a lot of attention from the existential risk people, while they ignored the mundane, "unsexy" risks such as the ones I listed above. Was it a coincidence that this speaker was a woman and almost all the other ones were male? I THINK NOT. I had a quite therapeutic conversation with her afterwards where we vented at each other. She said that she had found that these techno-optimists had an almost religious worldview where it is very hard to reach them with arguments.
(no subject)
Date: 2017-09-08 09:29 pm (UTC)

AHAHAHAHA that's very cute! I mean, it's his own decision whether he wants to care about political things, and fine if he doesn't want to (I am not super inclined that way myself), but to say that the problem is trivial because there exists a scientific solution is completely... wrong.
Although he sounds like a certain class of academically-oriented people I know, where it's all about who postures best, and quite possibly the way to get respect from someone like that is not actually to show humility but rather to assert loudly, using as many facts (even when irrelevant) and as much logic (even when illogical) as possible to buttress your claim, that his problem is trivial and yours is much more important. (Though that's frankly incredibly unpleasant, and also the VAST majority of actually smart people I know, many in academia (as opposed to academically-oriented people who think they're smart), are quite good at showing humility and interest in other points of view, which is a big plus when doing research.)
(I know I'm not saying anything you don't already know. I'm just SO ANNOYED on your behalf.)
Also, I suppose this is kind of undercutting your point, but I find it hard to take people seriously who really believe in the risk to humanity from intelligent AI. I don't work in the field exactly, but I'm kind of tangential to it at times (have worked on learning algorithms, etc. in the past) and... it's just... it doesn't really make any sense to me to worry about it. It's like saying you're worried that dolphins will evolve superior genetic capabilities and enslave the human race. It's... I guess you could say that there's a nonzero chance it COULD happen? And maybe it might be interesting to think about and to wonder about how it might happen, and maybe a couple of smart people might want to think about it? But... let's just say there are a lot of other things that will get us first. I will say that I went and looked up some people and there ARE well-regarded people in the field that disagree with me about the size of the relative risks (I would put it at practically zero, but others would put it higher), but I think most would agree that many other things would pose a much greater risk in the short term.
Also, this "leading AI researcher" -- I just find it hard to believe that someone actually high up in the field is saying this kind of nonsense. The most important current risks from AI as I understand them are actually not totally different from the reasons why climate change is hard -- if an AI is put in control of things, what kind of objective function is the AI using to control whatever it is it's controlling (humans, human society, whatever), who designed that function with what inputs, what unintended effects did/does it have, what is it ignoring? We already see this with learning algorithms and algorithmic ways of interpreting data -- think about using standardized testing to grade teachers, for example.
Ugh, sorry for the rant, this is just pushing all my buttons.
She said that she had found that these techno-optimists had an almost religious worldview where it is very hard to reach them with arguments.
Yeah. It's really a little odd.
(no subject)
Date: 2017-09-08 10:15 pm (UTC)

Okay, he might not actually be a leading AI researcher--maybe it's rather that the speakers at the conference are big names in existential risk from AI. Which is maybe not the same thing.
And yeah, dismissing politics (in a broad sense, not just party politics) is just incredibly stupid, because that is how things get worked out. Like, maybe I start out wanting to do something about X, and to do that I have to do Y, and to do that I join an organization working on this, and then maybe I end up learning accounting to help the organization that I joined keep their finances in order. And some people might find that boring or stupid, to want to fix climate change but end up doing accounting, but it's not actually stupid. This is how you change things, with lots of people doing something, and maybe it's indirect, and maybe it will fail, but how else are you supposed to do it (aside from making better lifestyle choices)? It's not like you can play god and just dictate your will and it will happen...
(no subject)
Date: 2017-09-09 03:17 am (UTC)

Now this I actually believe. I feel like there are a bunch of "big names" in existential risk from AI who are sort of these people who have taught themselves from what they think is reasonable... and, I mean, the problem with this is that for every Einstein who figures out relativity in a patent office there are about a thousand people who, because they haven't gone through what's already known and why it's known, come up with ways to exceed the speed of light and think they're all brilliant. But anyway.
And yes, the other thing I'd append to what you said (all of which is so true) is that people are MUCH harder than physics, in general. Like, how do you get a bunch of people to work against their self-interest? What does self-interest even mean in this context? (e.g., there are tribal-ish loyalties that start coming into play -- it's all very complicated!)
(no subject)
Date: 2017-09-09 09:40 am (UTC)

Also, another thing I wanted to say was that they had a different conception of what "risk" was than I had. For example, this is a future I would consider to be a pretty good future for humanity: we figure out our environmental issues and stabilize at a sustainable level of population, survive a million years longer, and then die out for whatever reason. They would regard this as a failure because we eventually died out.
OTOH, they would consider this future a success: 99.9% of the people alive today suffer and die prematurely throughout the next hundred years until only a remnant is left. However, that remnant manages to get away and colonize space. This they would regard as a success because we got away from Earth!
ETA: Possibly I am exaggerating slightly in the above examples. But only slightly.
ETA 2: And probably a further difference is that I think any effort to colonize space will 1) most probably fail because we can't survive without the ecosystem we are part of, and 2) exhaust resources that we need here on Earth.
Oh, oh, and also: there was this guy who said that in a seminar next week, he would present a proof that HUMANS HAVE NO VALUES. I expect that he is either going to 1) define the word "values" in a pointless way, or 2) use an inappropriate explanation level ("humans are made of molecules! molecules have no values! therefore humans have no values!"). ARRRRRGH.
(no subject)
Date: 2017-09-09 08:15 pm (UTC)

I... don't think you're exaggerating a great deal. And, I mean, I am speaking from a perspective here where I actually kind of understand where they are coming from: if you are a sheltered technical person who likes everything to be consistent and completely rational, and you construct an objective function for humanity (by constructing a suitably smooth abstraction for "humanity") that has an extremely large value for "gets away from earth," then sure, it makes perfect sense! And I get this because I grew up as a sheltered technical borderline-Aspergers person who made those kinds of judgments. (And, hilariously to me now, I actually wrote a MacGyver future!fic as a teenager based on basically the premise that this was the future implemented by a shadowy conspiracy government of highly competent megalomaniacs. But anyway.) ...But then I grew up.
And probably a further difference is that I think any effort to colonize space will 1) most probably fail because we can't survive without the ecosystem we are part of, and 2) exhaust resources that we need here on Earth.
This is interesting to me! I am not sure about (2) (I mean, you're right, but does it depend on who's making the decision about "need"?), but (1) seems extremely reasonable and I feel like I don't hear people talking about it.
and also: there was this guy who said that in a seminar next week, he would present a proof that HUMANS HAVE NO VALUES.
...Heh. My response is, again, that's very cute. HUMANS ARE ACTUALLY KIND OF DIFFICULT TO MODEL, YO.
(no subject)
Date: 2017-09-09 10:21 pm (UTC)

Yes, I know that it arises from assigning value to future humans that don't exist yet, which is a reasonable thing to do, and then assuming that if we colonize space, there could theoretically exist a very very large number of humans. But it can then turn into this abstract number game where you pretty much ignore the actual humans who are living now.
but (1) seems extremely reasonable and I feel like I don't hear people talking about it.
Kim Stanley Robinson makes that argument in Aurora, which I thought was a very interesting book.
About 2), I guess I'm also talking about side effects, not just using up resources--like, say you burn a lot of fossil fuel while trying to colonize space, which then fucks up Earth's climate further.
ARRRRGGH
Date: 2017-09-09 12:57 am (UTC)

This reminds me of the joke about the physicist, the engineer and the mathematician who encounter a fire in the kitchen.
The physicist determines how long the fire's been burning, the volume of combustibles, calculates the expected duration of the fire, finds a fire extinguisher and puts it out.
The engineer grabs the fire extinguisher & puts it out.
The mathematician glances at the fire and at the extinguisher, says, "There is a solution!" and goes back to bed.
Re: ARRRRGGH
Date: 2017-09-09 09:45 am (UTC)

I am now going to copy-paste from my comment to cahn:
Also, another thing I wanted to say was that they had a different conception of what "risk" was than I had. For example, this is a future I would consider to be a pretty good future for humanity: we figure out our environmental issues and stabilize at a sustainable level of population, survive a million years longer, and then die out for whatever reason. They would regard this as a failure because we eventually died out.
OTOH, they would consider this future a success: 99.9% of the people alive today suffer and die prematurely throughout the next hundred years until only a remnant is left. However, that remnant manages to get away and colonize space. This they would regard as a success because we got away from Earth!
Possibly the above is a slight exaggeration, but only a slight one.
Oh, oh, and also: there was this guy who said that in a seminar next week, he would present a proof that HUMANS HAVE NO VALUES. I expect that he is either going to 1) define the word "values" in a pointless way, or 2) use an inappropriate explanation level ("humans are made of molecules! molecules have no values! therefore humans have no values!"). ARRRRRGH.
(no subject)
Date: 2017-09-10 08:13 pm (UTC)

Relatedly, in regards to women/men: I was at a conference recently with only one female speaker, who commented on the fact that she was the only female speaker. Unfortunately she then went into how the other invited female speakers had cancelled because the name tags had pins, which ruin silk shirts (which she was wearing). She may have been going for a metaphor about men arranging conferences, science, etc. from only their own perspective and with only their own needs in mind, but it mainly ended up sounding like she was whining about her shirt being ruined by the pin. And sadly I got the impression it made the men in the audience dismiss her actual point, which was that she was the only female speaker in between 7 male speakers!