Articles on AI and ethics/society
 
== Articles relating to bias / discrimination in AI ==
 
Some articles about a study from Princeton/Bath demonstrating many biases in language corpora taken from the web:
  
* The original study: [http://science.sciencemag.org/content/356/6334/183.full Semantics derived automatically from language corpora contain human-like biases] (Caliskan, Bryson and Narayanan, April 2017); a toy illustration of the underlying association test appears after this list.
* A [https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals Guardian article summarising the study].
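As a rough illustration of the kind of association test the study uses (a toy sketch with invented 3-dimensional vectors, not the authors' code or the web-trained embeddings they analysed), a word's bias score can be computed as its mean cosine similarity to one attribute set minus its mean similarity to another:

<syntaxhighlight lang="python">
# Toy WEAT-style association score with invented vectors (illustration only).
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attrs_a, attrs_b):
    # Positive: the word sits closer to set A; negative: closer to set B.
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Made-up embeddings purely to show the computation.
vec = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}
A, B = [vec["pleasant"]], [vec["unpleasant"]]
print(association(vec["flower"], A, B))   # > 0: associated with 'pleasant'
print(association(vec["insect"], A, B))   # < 0: associated with 'unpleasant'
</syntaxhighlight>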
  
<br>
Some articles/data relating to [https://www.propublica.org/ ProPublica's] study on [https://www.propublica.org/series/machine-bias machine bias] in the COMPAS algorithm for predicting recidivism:
* [https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say] (Angwin and Larson, ProPublica 2016); a brief numerical illustration of the underlying result appears after this list.
* There's more detail on the data analysis for this study [https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm here].
* A [https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?utm_term=.27d5f7acdcab Washington Post blog article] about the ProPublica study, by Sam Corbett-Davies, Emma Pierson, Avi Feller and Sharad Goel.
* A sample of [https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html COMPAS's risk assessment questionnaire] (unconfirmed).
* A [https://socserv.socsci.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Cox-Regression.pdf primer on Cox regression], the statistical model used by COMPAS (with worked examples in R), by John Fox and Sanford Weisberg; an appendix to their book 'An R Companion to Applied Regression' (Sage Publishing, 2011). The model's general form is sketched just below.
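For orientation, the primer above covers the standard Cox proportional hazards model (the textbook form, not necessarily the exact specification used inside COMPAS):

<math>h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \cdots + \beta_p x_p)</math>

where <math>t</math> would be the time to re-offending, <math>h_0(t)</math> is an unspecified baseline hazard, the <math>x_i</math> are covariates (for example, items from the questionnaire above), and the <math>\beta_i</math> are fitted coefficients.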
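The 'mathematically inevitable' claim in the first item of this list can be illustrated with a small calculation of my own (invented numbers, not ProPublica's data), using an identity from Chouldechova's 2017 analysis of this trade-off: if a score has the same positive predictive value (PPV) and the same false negative rate (FNR) for two groups whose underlying re-arrest rates differ, the groups' false positive rates (FPR) cannot also be equal.

<syntaxhighlight lang="python">
# Toy numbers: equal PPV and FNR across two groups with different base rates
# forces different false positive rates.
# Identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), with p the base rate.
def false_positive_rate(base_rate, ppv, fnr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.6, 0.3                # held equal for both groups
for p in (0.5, 0.3):               # invented re-arrest base rates
    print(f"base rate {p:.1f} -> FPR {false_positive_rate(p, ppv, fnr):.3f}")
# base rate 0.5 -> FPR 0.467
# base rate 0.3 -> FPR 0.200
</syntaxhighlight>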
<br>
Articles on uses of AI in criminal justice:
* Jason Tashea: [https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/ Courts are using AI to sentence criminals. That must stop now]. (Wired, April 2017)
* An interesting symposium organised by Jason Tashea: [http://www.justicecodes.org/symposium/ Justice Codes] (October 2016)
* A useful review article on [https://epic.org/algorithmic-transparency/crim-justice/ Algorithms in the Criminal Justice System] (including a good bibliography of academic articles) by the [https://epic.org/ Electronic Privacy Information Center].
* Goel, Rao and Shroff's paper about a proposed method for reducing police bias in stop-and-search events: [https://5harad.com/papers/risky.pdf Personalized Risk Assessments in the Criminal Justice System] (American Economic Review, 2016)
  
 
== Discussions of the impact on people of AI ==

== Media reports and opinions about future AI ==

== Academic reports about future AI ==

== Business / economic reports about AI ==

== Academic discussions about the likely impact of AI on jobs ==

* The UN International Labour Organisation's 2016 report on the impact of automation on jobs in South-East Asia: ASEAN in Transformation. This report estimates that 56% of the total workforce of ASEAN countries is at risk of displacement by robots.
* A study from the US National Bureau of Economic Research (Acemoglu and Restrepo, Working Paper No. 23285, 2017): Robots and Jobs: Evidence from US Labor Markets. This reports a regression analysis of the influence of robots on employment levels and wages. For the US areas studied, the authors estimate that 'one more robot per thousand workers reduces the employment to population ratio by about 0.18-0.34 percentage points and wages by 0.25-0.5 percent', and that these effects are distinct from the impact of imports from China and Mexico, the decline of routine jobs, offshoring, and several other factors. (Here's an article on the study in the New York Times.) A back-of-envelope reading of these estimates follows below.
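To make those coefficient ranges concrete, here is a back-of-envelope sketch (the exposure increase is invented; the per-robot ranges are the ones quoted above, not new results):

<syntaxhighlight lang="python">
# Hypothetical area that gains 2 robots per thousand workers, combined with
# the paper's quoted per-robot estimate ranges.
added_robots_per_1000_workers = 2.0
emp_drop_pp = (0.18 * added_robots_per_1000_workers, 0.34 * added_robots_per_1000_workers)
wage_drop_pct = (0.25 * added_robots_per_1000_workers, 0.50 * added_robots_per_1000_workers)
print(f"employment-to-population ratio: down {emp_drop_pp[0]:.2f} to {emp_drop_pp[1]:.2f} percentage points")
print(f"wages: down {wage_drop_pct[0]:.2f} to {wage_drop_pct[1]:.2f} percent")
</syntaxhighlight>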


Some links from Murat Ungor on AI and employment:

== Articles relating to regulation of AI ==


== AI and legal issues ==

* An article about possible EU legislation to classify advanced robots as 'electronic persons' (June 2016)

== NZ initiatives ==

== Recent initiatives relating to the future of AI ==