Hacker News

The amount of resources required to run AI research has already made it a deep-pocket game. Current AI research is a big corporate game. You will have neither the compute nor the data to do anything original, unless it falls under transfer learning and somebody is kind enough to share a pretrained model.
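To make the transfer-learning point concrete, here is a minimal sketch of the workflow: a frozen "pretrained" feature extractor plus a small trained head. (Assumption: a real setup would load actual pretrained weights, e.g. an ImageNet backbone; a fixed random projection stands in for it here.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: frozen, never updated.
W_frozen = rng.normal(size=(20, 8)) * 0.1

def features(x):
    # Frozen "backbone" forward pass.
    return np.tanh(x @ W_frozen)

# Small labeled dataset for the new downstream task.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only this linear head is trained (logistic regression, gradient descent).
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w + b)))
    grad = p - y
    w -= 0.1 * features(X).T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((features(X) @ w + b > 0) == y).mean()
print(f"train accuracy with frozen features: {acc:.2f}")
```

The expensive part (the backbone) is reused as-is; only a tiny head is fit, which is exactly why a shared pretrained model lowers the compute and data bar.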

I am not sure how setting the right principles, which everyone should follow, would put it out of your reach. I think it will force the corporations to avoid taking unnecessary risks with something we don't understand.

Perhaps a much more thought-through version of Asimov's Three Laws of Robotics.



> You will have neither the compute nor the data to do anything original, unless it falls under transfer learning and somebody is kind enough to share a pretrained model.

Lots of recent research goes into doing something useful with less data and compute. E.g., there was a lot of zero/few-shot learning work at the last CVPR.
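One common flavor of few-shot learning is metric-based: average the few labeled "support" examples per class into a prototype, then classify queries by nearest prototype (the idea behind prototypical networks). A toy sketch, using raw 2-D points as a stand-in for embeddings from a pretrained network:

```python
import numpy as np

# A 2-way, 3-shot episode: three labeled support examples per class.
support = {
    "cat": np.array([[0.9, 1.1], [1.0, 0.8], [1.1, 1.0]]),
    "dog": np.array([[-1.0, -0.9], [-0.8, -1.1], [-1.1, -1.0]]),
}

# One prototype per class: the mean embedding of its support set.
prototypes = {label: pts.mean(axis=0) for label, pts in support.items()}

def classify(query):
    # Nearest prototype in Euclidean distance wins.
    return min(prototypes, key=lambda lbl: np.linalg.norm(query - prototypes[lbl]))

print(classify(np.array([0.7, 0.9])))    # near the "cat" cluster -> cat
print(classify(np.array([-0.9, -0.7])))  # near the "dog" cluster -> dog
```

No gradient steps at all at classification time, which is the point: with a good embedding, six labeled examples are enough to define a new classifier.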


> The amount of resources required to run AI research has already made it a deep-pocket game.

There's no shortage of existing AI techniques that don't require google*days of CPU power, and there's no reason to believe research on all of them is completely tapped out.


Agreed, and I am not arguing to regulate the obvious. But now that AI can potentially touch and transform every walk of life, guidelines are needed to keep big players from going too far. There won't be any grounds for plausible deniability if principles are set.

For example, since deep learning is bound to touch every aspect of our lives, I think explainable models should be a must[1]. Look at section 3.5, where a classifier was predicting with 94% accuracy for completely unrelated reasons, like a person's name.

>Although this classifier achieves 94% held-out accuracy, and one would be tempted to trust it based on this, the explanation for an instance shows that predictions are made for quite arbitrary reasons (words “Posting”, “Host”, and “Re” have no connection to either Christianity or Atheism). The word “Posting” appears in 22% of examples in the training set, 99% of them in the class “Atheism”. Even if headers are removed, proper names of prolific posters in the original newsgroups are selected by the classifier, which would also not generalize.
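The failure described above is easy to reproduce in miniature: if a metadata token correlates with the label, a bag-of-words classifier will latch onto it, and inspecting per-word scores exposes the shortcut. (This is a toy stand-in for the paper's LIME explanations, not LIME itself; the vocabulary and corpus below are invented.)

```python
from collections import Counter

# "posting" plays the role of the leaked header token from the paper.
vocab = ["god", "faith", "science", "evidence", "posting"]

# Tiny synthetic corpus: "posting" appears only in atheism docs.
docs = [
    (["god", "faith"], "christianity"),
    (["god", "evidence"], "christianity"),
    (["faith", "god"], "christianity"),
    (["science", "posting"], "atheism"),
    (["evidence", "posting"], "atheism"),
    (["science", "evidence", "posting"], "atheism"),
]

counts = {"atheism": Counter(), "christianity": Counter()}
totals = {"atheism": 0, "christianity": 0}
for words, label in docs:
    counts[label].update(words)
    totals[label] += len(words)

def score(word):
    # Naive predictiveness score: P(word|atheism) - P(word|christianity).
    return counts["atheism"][word] / totals["atheism"] - \
           counts["christianity"][word] / totals["christianity"]

top = max(vocab, key=score)
print("most 'atheism-predictive' token:", top)  # -> posting
```

The classifier's strongest "atheism" signal is a header artifact with no connection to the topic, which is exactly why held-out accuracy alone is not grounds for trust.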

If allowed, industry will choose to ignore the downside in a race to capture the market. For example, industry is trying to play catch-up in the area of security only after it became a serious issue. It's not that the principles were not known; it's just that industry chose to ignore them[2]. If the same happens with AI, it could result in widespread loss of human life. The Tesla death is a perfect example of this, where Autopilot simply failed to distinguish between a white trailer and a brightly lit sky[3].

[1] https://arxiv.org/pdf/1602.04938.pdf

[2] http://csrc.nist.gov/nissc/1998/proceedings/paperF1.pdf

[3] https://www.theguardian.com/technology/2016/jun/30/tesla-aut...




