Actuarial Outpost
 
Actuarial Outpost > Cyberchat > Non-Actuarial Topics
  #1  
Old 11-02-2017, 05:15 PM
Sredni Vashtar
Member
 
Join Date: Mar 2010
Favorite beer: pilseners
Posts: 10,397
Blog Entries: 1
The Technological Singularity

Is it coming? Is it scary?

Thoughts? Feels? Reals?
__________________
L’humour est la politesse du désespoir
  #2  
Old 11-02-2017, 06:48 PM
DiscreteAndDiscreet
Member
AAA
 
Join Date: May 2016
Posts: 478

It's BS. Technological progress is about improving efficiency in physical processes and in information processing, and there are hard upper bounds that you bump into. People who talk about the technological singularity have some sort of belief that once you achieve a certain level of growth in intelligence, it becomes a self-sustaining process, and I don't see that as likely.

AI isn't magical. There are tradeoffs in accuracy, computation time, and other resources for different methods of deduction and inference. These methods are also sensitive to the domain of the problem, and there are results that make it unreasonable to expect general-purpose problem-solving algorithms that work everywhere. There's no evidence that you'll find a magic method of generating good answers that a computer can crank out all day. Bottom-up methods such as simulating the brain in a computer are likely to be harder to implement than anticipated, since the hard part is figuring out which aspects of the physical brain need to be reproduced accurately and which can be approximated without causing issues.

Going forward, yes, you are going to have a surplus of computational power, but the problems of the world will divide into those whose answers are "shallow" enough to be located by throwing CPU cycles at them and those whose answers are "deep" and can only be located by stumbling upon the necessary insight.
  #3  
Old 11-03-2017, 10:04 AM
Sredni Vashtar
Member
 
Join Date: Mar 2010
Favorite beer: pilseners
Posts: 10,397
Blog Entries: 1

Quote:
Originally Posted by DiscreteAndDiscreet View Post
It's BS. Technological progress is about improving efficiency in physical processes and in information processing and there are hard upper bounds that you bump into.
I'm not really sure what you mean by this. The Singularity is usually about "Moore's Law." I think Moore's Law is slowing down a bit and will slow down more when it runs into quantum mechanics. But I don't think it will suddenly stop. I think we will press on, despite the hard road, and make it well past the point where Mother Nature stopped improving Her transistors, because we won't have to worry about things like power consumption.
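As a back-of-envelope check on how much room feature-size halving has left before it hits atomic scale (the 14 nm node and ~0.2 nm silicon lattice spacing below are ballpark 2017-era figures, not authoritative):

```python
# Rough sketch: how many more halvings of feature size are possible
# before transistor dimensions approach the spacing of silicon atoms?
# Both starting values are ballpark assumptions, not exact figures.

feature_nm = 14.0   # assumed 2017-era process node
atomic_nm = 0.2     # assumed silicon lattice spacing

halvings = 0
while feature_nm / 2 > atomic_nm:
    feature_nm /= 2
    halvings += 1

print(halvings)  # → 6 under these assumed numbers
```

Even with generous assumptions, only a handful of halvings remain before the quantum-mechanical wall.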

Quote:
People who talk about the technological singularity have some sort of belief that when you achieve a certain level of growth in intelligence, it becomes a self sustaining process and I don't see that as a likely possibility.
Yeah, I don't know about the self-sustaining stuff. When I think of the Singularity, I think of designing machines that are 100x smarter than people; not because of some self-sustaining miracle feedback loop, but because of our own tendency to make computers faster and faster.

It seems like once computers pass us-- at anything-- they immediately transcend us 100x over.

Quote:
AI isn't magical. There are tradeoffs in terms of accuracy, computation time and other resources for different methods of deduction and inference. They're also sensitive to the domain of the problem and there are results that make it unreasonable to expect general purpose problem solving algorithms that work everywhere. There's no evidence that you'll find a magic method of generating good answers that you can have a computer crank out all day. Bottom up methods such as simulating the brain in a computer are likely to be harder to implement than anticipated since the hard part is figuring out what aspects of the physical brain need to be reproduced accurately and which ones can be approximated without causing issues.

Going forward, yes you are going to have a surplus of computational power, but the problems of the world are going to be divided into problems where the answer is "shallow" enough to be located by throwing CPU cycles at it and those where the answer is "deep" and can only be located by stumbling upon the necessary insight.
I don't see why "simulating a human brain" would be the only way to get generalized intelligence. Humans are not "magical," either. We are just animals with a lot of computing power. It is really just a freak accident of nature that we can talk. If natural selection is capable of making that leap, out of total randomness, don't you think Google will be able to do the same?
__________________
L’humour est la politesse du désespoir

Last edited by Sredni Vashtar; 11-03-2017 at 10:08 AM..
  #4  
Old 11-03-2017, 10:19 AM
Bicycle Repair Man
Member
 
Join Date: Nov 2003
Posts: 14,400

I think most people have no idea how smart AI is already. There's nothing magical about the way brains work that can't be emulated on a computer eventually. It might be longer than 30 years away, but definitely less than 100.
__________________
Farewell, Necco Wafers. At least you died the way you lived: tossed away after someone realized they accidentally bought Necco Wafers. - Cedric Voets, Cracked.com
  #5  
Old 11-03-2017, 01:12 PM
Optimus Prime
Member
 
Join Date: Nov 2002
Posts: 1,001

Quote:
Originally Posted by Bicycle Repair Man View Post
I think most people have no idea how smart AI is already. There's nothing magical about the way brains work that can't be emulated on a computer eventually. It might be longer than 30 years away, but definitely less than 100.
Under this thinking, there is no such thing as consciousness? Everything is just layers of statistical-model evaluations that can be replicated by a computer?
  #6  
Old 11-03-2017, 01:23 PM
Sredni Vashtar
Member
 
Join Date: Mar 2010
Favorite beer: pilseners
Posts: 10,397
Blog Entries: 1

Quote:
Originally Posted by Optimus Prime View Post
Under this thinking, there is no such thing as consciousness? Everything is just layers of statistical-model evaluations that can be replicated by a computer?
A brain is a computer, and consciousness is its function.
__________________
L’humour est la politesse du désespoir
  #7  
Old 11-03-2017, 02:31 PM
QMO
Member
Non-Actuary
 
Join Date: Jan 2005
Location: Iowa
Studying for life
College: Several, teacher and student
Favorite beer: I like milk (2% preferred).
Posts: 15,649

Quote:
Originally Posted by Sredni Vashtar View Post
...We are just animals with a lot of computing power...
No.

Talking, tools, opposable thumbs, etc., are minor.

Moral agency, on the other hand, is neither minor nor quantitative.
__________________
End of line.
  #8  
Old 11-03-2017, 02:39 PM
Vorian Atreides
Wiki/Note Contributor
CAS
 
Join Date: Apr 2005
Location: As far as 3 cups of sugar will take you
Studying for CSPA
College: Hard Knocks
Favorite beer: Most German dark lagers
Posts: 68,884

I'm waiting for the artificial intelligence that can replicate irrational behavior . . . or autonomously generate its own set of moral codes . . .
__________________
I find your lack of faith disturbing

Why should I worry about dying? It’s not going to happen in my lifetime!


Freedom of speech is not a license to discourtesy

#BLACKMATTERLIVES
  #9  
Old 11-03-2017, 02:52 PM
DiscreteAndDiscreet
Member
AAA
 
Join Date: May 2016
Posts: 478

Quote:
Originally Posted by Sredni Vashtar View Post
I'm not really sure what you mean by this. The Singularity is usually about "Moore's Law." I think Moore's Law is slowing down a bit and will slow down more when it runs into quantum mechanics. But I don't think it will suddenly stop. I think we will press on, despite the hard road, and make it well past the point where Mother Nature stopped improving Her transistors, because we won't have to worry about things like power consumption.


Yeah, I don't know about the self-sustaining stuff. When I think of the Singularity, I think of designing machines that are 100x smarter than people; not because of some self-sustaining miracle feedback loop, but because of our own tendency to make computers faster and faster.

It seems like once computers pass us-- at anything-- they immediately transcend us 100x over.


I don't see why "simulating a human brain" would be the only way to get generalized intelligence. Humans are not "magical," either. We are just animals with a lot of computing power. It is really just a freak accident of nature that we can talk. If natural selection is capable of making that leap, out of total randomness, don't you think Google will be able to do the same?
Both the ability to dissipate heat from a computing element and the I/O rate to a computing element shrink as component size decreases. Power consumption per operation is another item with physical constraints. These issues are thermodynamic in nature. The I/O-rate constraint means that, after a certain point, making computing elements smaller only improves performance on problems with a high enough ratio of computation steps to external data reads.
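That I/O constraint is often summarized by the "roofline" model: attainable throughput is capped either by raw compute or by memory bandwidth times arithmetic intensity (operations per byte moved). The peak and bandwidth figures below are invented for illustration:

```python
# Roofline sketch: a chip's attainable throughput is the lesser of its
# peak compute rate and (memory bandwidth x arithmetic intensity).
# peak_gflops and bandwidth_gb_s are made-up illustrative numbers.

def attainable_gflops(intensity_ops_per_byte,
                      peak_gflops=1000.0,
                      bandwidth_gb_s=50.0):
    return min(peak_gflops, intensity_ops_per_byte * bandwidth_gb_s)

# A low-intensity workload is bandwidth-bound: raising peak compute
# (e.g. by shrinking components) doesn't speed it up at all.
print(attainable_gflops(2))   # → 100.0, limited by I/O
print(attainable_gflops(50))  # → 1000.0, limited by compute
```

For the bandwidth-bound case, doubling peak compute changes nothing; only moving less data, or moving it faster, helps.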

Even if we suppose that we are just going to use a (physically) big computer to solve a problem, the problems we want to solve have their own constraints. The amount of computation needed to improve on the current best solution to an optimization problem can be disproportionate to the economic benefit of that improvement. It's generally not optimal to try to optimize everything.
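A toy model of that tradeoff, with all numbers invented: suppose each successive unit of compute finds an improvement worth half the previous one, while each unit of compute costs the same. It stops paying off quickly:

```python
# Toy diminishing-returns model: value of the improvement found by the
# k-th unit of compute halves each step, while compute cost is flat.
# first_gain, decay, and cost_per_step are arbitrary illustrative values.

def marginal_gain(step, first_gain=100.0, decay=0.5):
    return first_gain * decay ** step

cost_per_step = 5.0

steps = 0
net_value = 0.0
while marginal_gain(steps) > cost_per_step:
    net_value += marginal_gain(steps) - cost_per_step
    steps += 1

print(steps)      # → 5: past this point, more optimization loses money
print(net_value)  # → 168.75 under these assumed numbers
```

The economically optimal stopping point arrives long before the mathematically optimal solution does.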

100x more computational effort thrown at any particular problem currently handled by humans will sometimes have impressive results and sometimes won't. This is just looking at the "effort" element. Adding more effort is a dominating strategy for certain problems: those that (1) have very strong strategies (meaning both rules driving actions and data sets used as inputs) that can be developed from brute-force calculation, (2) can have data generated for study, or have an existing data set too large for a human to study, and (3) have essentially fixed rules. Chess and the recent news about Elon Musk's DOTA bot fit this mold. Driving a car mostly involves these types of problems.
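The "fixed rules + brute force" category in miniature: tic-tac-toe has fixed rules and a state space small enough to exhaust, so a plain minimax search (a standard technique, sketched here from scratch) reaches perfect play with no insight at all:

```python
# Exhaustive minimax over tic-tac-toe: fixed rules, generatable data,
# small state space -- exactly the kind of problem raw compute dominates.

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0  # full board, no winner: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None
    return max(scores) if player == 'X' else min(scores)

print(minimax([None] * 9, 'X'))  # → 0: perfect play is a draw
```

No strategy or pattern knowledge went in; the search simply visits every reachable position, which is exactly what stops working once the rules aren't fixed or the state space explodes.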

Research and everyday economic and social problems don't match up with this as well. Increasing computation may increase the rate at which ideas diffuse across different fields, but there are still elements of new discoveries that are essentially reached by making random modifications to an existing body of knowledge. Economic and social problems have strong constraints on the amount of data available for study: one year's worth of data is generated per year. There are opportunities to increase efficiency, but I only expect this to improve things up to a point determined by some non-computational constraint.

Maybe eventually you can talk about restructuring society to better take advantage of greater computing power. That is potentially possible but it's not something that follows automatically from current trends. It's not clear how large any barriers that need to be crossed would be.
  #10  
Old 11-03-2017, 02:54 PM
DiscreteAndDiscreet
Member
AAA
 
Join Date: May 2016
Posts: 478

Quote:
Originally Posted by Vorian Atreides View Post
I'm waiting for the artificial intelligence that can replicate irrational behavior . . . or autonomously generate its own set of moral codes . . .
I expect human behavior to be metarational (rational up to the point where being more rational is too time-consuming to be worthwhile), and I'm usually not surprised by people's actions.