AI for the People
Doctoral School of Engineering and Science at Aalborg University
The notion of Artificial Intelligence (AI) dates back approximately 70 years as a research field, and even longer if one considers fiction writers. A number of different definitions of AI have been suggested over the years, but none seems to capture fully what AI is. This may be because AI concerns computer algorithms that behave intelligently, and since the capabilities of computer algorithms improve over time, no static definition is possible.
One aspect of AI is the ability to learn or adapt dynamically. This concept has inspired numerous science-fiction books and movies with the underlying theme of man vs. AI (often manifested in a robot). From this, ethical and regulatory considerations follow naturally. But until recently, such considerations (see, for example, the Three Laws of Robotics formulated by the science-fiction writer Isaac Asimov) have been speculative, since AI algorithms (and their manifestation in mechanical devices) performed poorly and hence never left university labs around the world. Recently, however, fast hardware and massive amounts of data have allowed researchers to revisit a particular class of AI algorithms invented in the 1980s, namely Artificial Neural Networks (ANNs), and to increase greatly the size of the networks used in these models. This was first demonstrated on image-processing tasks such as recognizing hand-written digits, with remarkable results. Inspired by this success, ANNs (now known as Deep Learning (DL)) were quickly adopted by other research fields, where similar successes have been witnessed.
DL algorithms can now outperform humans on a number of tasks. Moreover, they can, to a certain degree, learn new tasks. An important point in this regard is that these algorithms are so complex that it is next to impossible to understand their inner workings. So, we seem to be facing a reality where AI, in the not too distant future, will be used to make decisions (simply because it performs better than humans). This raises a number of ethical and regulatory questions, such as: 1) how do we ensure that AI systems do not discriminate against certain groups in the population, 2) how do we ensure transparency about the decisions made by AI systems, and, relatedly, 3) could and should individuals be given a substantial right to an explanation of decisions made by such systems and a substantial right not to be subjected to automated decision-making (cf. the GDPR)? Since many of the AI systems currently being developed operate on the basis of large amounts of data, the development and use of such systems also reinvigorate the ethical issues related to 'Big Data'. Finally, there are problems related to the efficacy and safety of AI systems. These raise questions not only about how appropriate monitoring of the development of such systems can be secured, but also, and more importantly, about the appropriate domains for their use.
For additional information, updates, and registration, please refer to AAU PhDMoodle via the link provided on the right side of this page.