"This vital and erudite work of scholarship provides a lucid account of how artificial intelligence works, illuminating both the deepest fears of AI’s Cassandras and the wildest hopes of its Pollyannas. It will be an essential resource for anyone serious about understanding both the risks and opportunities of the AI revolution. The book provides a comprehensive and insightful overview of rapidly developing fields, explaining technical issues with engaging clarity. Specialists will value the meticulous detail and rigour while general readers will appreciate the rich and concise overviews. It makes clear the complexity of challenges like algorithmic bias, AI ethics and privacy, but also reviews promising approaches like explainable AI and artificial emotion. The intriguing exercises at the end of each section will inspire anyone teaching or studying Human-AI interaction. Whether exploring probabilistic reasoning or the philosophy of consciousness, the authors are sure and helpful guides. This is everything you wanted to know about AI but were afraid to ask for fear of revealing your shameful ignorance." --Mark Blythe, Professor of Design and Creative Lead for AI, Northumbria University, UK

An authoritative and accessible one-stop resource, the first edition of An Introduction to Artificial Intelligence presented the first full examination of AI. Designed to build an understanding of the foundations of artificial intelligence, it examined the central computational techniques employed by AI, including knowledge representation, search, reasoning and learning, as well as the principal application domains of expert systems, natural language, vision, robotics, software agents and cognitive modelling. Many of the major philosophical and ethical issues of AI were also introduced. This new edition expands and revises the book throughout, adding new material to existing chapters, including short case studies, as well as new chapters on explainable AI and big data. It expands the book's focus on human-centred AI, covering gender and ethnic bias, the need for transparency, augmentation vs. replacement, intelligent user interfaces (IUI), and designing interactions to aid machine learning. With detailed, well-illustrated examples and exercises throughout, this book provides a substantial and robust introduction to artificial intelligence in a clear and concise coursebook form. It stands as a core text for all students and computer scientists approaching AI.
List of Figures
Preface
Author Bio

Chapter 1 ■ Introduction
1.1 WHAT IS ARTIFICIAL INTELLIGENCE?
1.1.1 How much like a human: strong vs. weak AI
1.1.2 Top-down or bottom-up: symbolic vs. sub-symbolic
1.1.3 A working definition
1.1.4 Human intelligence
1.1.5 Bottom up and top down
1.2 HUMANS AT THE HEART
1.3 A SHORT HISTORY OF ARTIFICIAL INTELLIGENCE
1.3.1 The development of AI
1.3.2 The physical symbol system hypothesis
1.3.3 Sub-symbolic spring
1.3.4 AI Renaissance
1.3.5 Moving onwards
1.4 STRUCTURE OF THIS BOOK – A LANDSCAPE OF AI

Section I Knowledge-Rich AI

Chapter 2 ■ Knowledge in AI
2.1 OVERVIEW
2.2 INTRODUCTION
2.3 REPRESENTING KNOWLEDGE
2.4 METRICS FOR ASSESSING KNOWLEDGE REPRESENTATION SCHEMES
2.5 LOGIC REPRESENTATIONS
2.6 PROCEDURAL REPRESENTATION
2.6.1 The database
2.6.2 The production rules
2.6.3 The interpreter
2.6.4 An example production system: making a loan
2.7 NETWORK REPRESENTATIONS
2.8 STRUCTURED REPRESENTATIONS
2.8.1 Frames
2.8.2 Scripts
2.9 GENERAL KNOWLEDGE
2.10 THE FRAME PROBLEM
2.11 KNOWLEDGE ELICITATION
2.12 SUMMARY

Chapter 3 ■ Reasoning
3.1 OVERVIEW
3.2 WHAT IS REASONING?
3.3 FORWARD AND BACKWARD REASONING
3.4 REASONING WITH UNCERTAINTY
3.4.1 Non-monotonic reasoning
3.4.2 Probabilistic reasoning
3.4.3 Certainty factors
3.4.4 Fuzzy reasoning
3.4.5 Reasoning by analogy
3.4.6 Case-based reasoning
3.5 REASONING OVER NETWORKS
3.6 CHANGING REPRESENTATIONS
3.7 SUMMARY

Chapter 4 ■ Search
4.1 INTRODUCTION
4.1.1 Types of problem
4.1.2 Structuring the search space
4.2 EXHAUSTIVE SEARCH AND SIMPLE PRUNING
4.2.1 Depth and breadth first search
4.2.2 Comparing depth and breadth first searches
4.2.3 Programming and space costs
4.2.4 Iterative deepening and broadening
4.2.5 Finding the best solution – branch and bound
4.2.6 Graph search
4.3 HEURISTIC SEARCH
4.3.1 Hill climbing and best first – goal-directed search
4.3.2 Finding the best solution – the A∗ algorithm
4.3.3 Inexact search
4.4 KNOWLEDGE-RICH SEARCH
4.4.1 Constraint satisfaction
4.5 SUMMARY

Section II Data and Learning

Chapter 5 ■ Machine learning
5.1 OVERVIEW
5.2 WHY DO WE WANT MACHINE LEARNING?
5.3 HOW MACHINES LEARN
5.3.1 Phases of machine learning
5.3.2 Rote learning and the importance of generalization
5.3.3 Inputs to training
5.3.4 Outputs of training
5.3.5 The training process
5.4 DEDUCTIVE LEARNING
5.5 INDUCTIVE LEARNING
5.5.1 Version spaces
5.5.2 Decision trees
5.5.2.1 Building a binary tree
5.5.2.2 More complex trees
5.5.3 Rule induction and credit assignment
5.6 EXPLANATION-BASED LEARNING
5.7 EXAMPLE: QUERY-BY-BROWSING
5.7.1 What the user sees
5.7.2 How it works
5.7.3 Problems
5.8 SUMMARY

Chapter 6 ■ Neural Networks
6.1 OVERVIEW
6.2 WHY USE NEURAL NETWORKS?
6.3 THE PERCEPTRON
6.3.1 The XOR problem
6.4 THE MULTI-LAYER PERCEPTRON
6.5 BACKPROPAGATION
6.5.1 Basic principle
6.5.2 Backprop for a single layer network
6.5.3 Backprop for hidden layers
6.6 ASSOCIATIVE MEMORIES
6.6.1 Boltzmann Machines
6.6.2 Kohonen self-organizing networks
6.7 LOWER-LEVEL MODELS
6.7.1 Cortical layers
6.7.2 Inhibition
6.7.3 Spiking neural networks
6.8 HYBRID ARCHITECTURES
6.8.1 Hybrid layers
6.8.2 Neurosymbolic AI
6.9 SUMMARY

Chapter 7 ■ Statistical and Numerical Techniques
7.1 OVERVIEW
7.2 LINEAR REGRESSION
7.3 VECTORS AND MATRICES
7.4 EIGENVALUES AND PRINCIPAL COMPONENTS
7.5 CLUSTERING AND K-MEANS
7.6 RANDOMNESS
7.6.1 Simple statistics
7.6.2 Distributions and long-tail data
7.6.3 Least squares
7.6.4 Monte Carlo techniques
7.7 NON-LINEAR FUNCTIONS FOR MACHINE LEARNING
7.7.1 Support Vector Machines
7.7.2 Reservoir Computing
7.7.3 Kolmogorov-Arnold Networks
7.8 SUMMARY

Chapter 8 ■ Going Large: deep learning and big data
8.1 OVERVIEW
8.2 DEEP LEARNING
8.2.1 Why are many layers so difficult?
8.2.2 Architecture of the layers
8.3 GROWING THE DATA
8.3.1 Modifying real data
8.3.2 Virtual worlds
8.3.3 Self learning
8.4 DATA REDUCTION
8.4.1 Dimension reduction
8.4.1.1 Vector space techniques
8.4.1.2 Non-numeric features
8.4.2 Reduce total number of data items
8.4.2.1 Sampling
8.4.2.2 Aggregation
8.4.3 Segmentation
8.4.3.1 Class segmentation
8.4.3.2 Result recombination
8.4.3.3 Weakly-communicating partial analysis
8.5 PROCESSING BIG DATA
8.5.1 Why it is hard – distributed storage and computation
8.5.2 Principles behind MapReduce
8.5.3 MapReduce for the cloud
8.5.4 If it can go wrong – resilience for big processing
8.6 DATA AND ALGORITHMS AT SCALE
8.6.1 Big graphs
8.6.2 Time series and event streams
8.6.2.1 Multi-scale with mega-windows
8.6.2.2 Untangling streams
8.6.2.3 Real-time processing
8.7 SUMMARY

Chapter 9 ■ Making Sense of Machine Learning
9.1 OVERVIEW
9.2 THE MACHINE LEARNING PROCESS
9.2.1 Training phase
9.2.2 Application phase
9.2.3 Validation phase
9.3 EVALUATION
9.3.1 Measures of effectiveness
9.3.2 Precision–recall trade-off
9.3.3 Data for evaluation
9.3.4 Multi-stage evaluation
9.4 THE FITNESS LANDSCAPE
9.4.1 Hill-climbing and gradient descent / ascent
9.4.2 Local maxima and minima
9.4.3 Plateau and ridge effects
9.4.4 Local structure
9.4.5 Approximating the landscape
9.4.6 Forms of fitness function
9.5 DEALING WITH COMPLEXITY
9.5.1 Degrees of freedom and dimension reduction
9.5.2 Constraints and dependent features
9.5.3 Continuity and learning
9.5.4 Multi-objective optimisation
9.5.5 Partially labelled data
9.6 SUMMARY

Chapter 10 ■ Data Preparation
10.1 OVERVIEW
10.2 STAGES OF DATA PREPARATION
10.3 CREATING A DATASET
10.3.1 Extraction and gathering of data
10.3.2 Entity reconciliation and linking
10.3.3 Exception sets
10.4 MANIPULATION AND TRANSFORMATION OF DATA
10.4.1 Types of data value
10.4.2 Transforming to the right kind of data
10.5 NUMERICAL TRANSFORMATIONS
10.5.1 Information
10.5.2 Normalising data
10.5.3 Missing values – filling the gaps
10.5.4 Outliers – dealing with extremes
10.6 NON-NUMERIC TRANSFORMATIONS
10.6.1 Media data
10.6.2 Text
10.6.3 Structure transformation
10.7 AUTOMATION AND DOCUMENTATION
10.8 SUMMARY

Section III Specialised Areas

Chapter 11 ■ Game playing
11.1 OVERVIEW
11.2 INTRODUCTION
11.3 CHARACTERISTICS OF GAME PLAYING
11.4 STANDARD GAMES
11.4.1 A simple game tree
11.4.2 Heuristics and minimax search
11.4.3 Horizon problems
11.4.4 Alpha–beta pruning
11.4.5 The imperfect opponent
11.5 NON-ZERO-SUM GAMES AND SIMULTANEOUS PLAY
11.5.1 The prisoner’s dilemma
11.5.2 Searching the game tree
11.5.3 No alpha–beta pruning
11.5.4 Pareto-optimality
11.5.5 Multi-party competition and co-operation
11.6 THE ADVERSARY IS LIFE!
11.7 PROBABILITY
11.8 NEURAL NETWORKS FOR GAMES
11.8.1 Where to use a neural network
11.8.2 Training data and self play
11.9 SUMMARY

Chapter 12 ■ Computer vision
12.1 OVERVIEW
12.2 INTRODUCTION
12.2.1 Why computer vision is difficult
12.2.2 Phases of computer vision
12.3 DIGITIZATION AND SIGNAL PROCESSING
12.3.1 Digitizing images
12.3.2 Thresholding
12.3.3 Digital filters
12.3.3.1 Linear filters
12.3.3.2 Smoothing
12.3.3.3 Gaussian filters
12.3.3.4 Practical considerations
12.4 EDGE DETECTION
12.4.1 Identifying edge pixels
12.4.1.1 Gradient operators
12.4.1.2 Roberts’ operator
12.4.1.3 Sobel’s operator
12.4.1.4 Laplacian operator
12.4.1.5 Successive refinement and Marr’s primal sketch
12.4.2 Edge following
12.5 REGION DETECTION
12.5.1 Region growing
12.5.2 The problem of texture
12.5.3 Representing regions – quadtrees
12.5.4 Computational problems
12.6 RECONSTRUCTING OBJECTS
12.6.1 Inferring three-dimensional features
12.6.1.1 Problems with labelling
12.6.2 Using properties of regions
12.7 IDENTIFYING OBJECTS
12.7.1 Using bitmaps
12.7.2 Using summary statistics
12.7.3 Using outlines
12.7.4 Using paths
12.8 FACIAL AND BODY RECOGNITION
12.9 NEURAL NETWORKS FOR IMAGES
12.9.1 Convolutional neural networks
12.9.2 Autoencoders
12.10 GENERATIVE ADVERSARIAL NETWORKS
12.10.1 Generated data
12.10.2 Diffusion models
12.10.3 Bottom-up and top-down processing
12.11 MULTIPLE IMAGES
12.11.1 Stereo vision
12.11.2 Moving pictures
12.12 SUMMARY

Chapter 13 ■ Natural language understanding
13.1 OVERVIEW
13.2 WHAT IS NATURAL LANGUAGE UNDERSTANDING?
13.3 WHY DO WE NEED NATURAL LANGUAGE UNDERSTANDING?
13.4 WHY IS NATURAL LANGUAGE UNDERSTANDING DIFFICULT?
13.5 AN EARLY ATTEMPT AT NATURAL LANGUAGE UNDERSTANDING: SHRDLU
13.6 HOW DOES NATURAL LANGUAGE UNDERSTANDING WORK?
13.7 SYNTACTIC ANALYSIS
13.7.1 Grammars
13.7.2 An example: generating a grammar fragment
13.7.3 Transition networks
13.7.4 Context-sensitive grammars
13.7.5 Feature sets
13.7.6 Augmented transition networks
13.7.7 Taggers
13.8 SEMANTIC ANALYSIS
13.8.1 Semantic grammars
13.8.1.1 An example: a database query interpreter revisited
13.8.2 Case grammars
13.9 PRAGMATIC ANALYSIS
13.9.1 Speech acts
13.10 GRAMMAR-FREE APPROACHES
13.10.1 Template matching
13.10.2 Keyword matching
13.10.3 Predictive methods
13.10.4 Statistical methods
13.11 SUMMARY
13.12 SOLUTION TO SHRDLU PROBLEM

Chapter 14 ■ Time Series and Sequential Data
14.1 OVERVIEW
14.2 GENERAL PROPERTIES
14.2.1 Kinds of temporal and sequential data
14.2.2 Looking through time
14.2.3 Processing temporal data
14.2.3.1 Windowing
14.2.3.2 Hidden state
14.2.3.3 Non-time domain transformations
14.3 PROBABILITY MODELS
14.3.1 Markov Model
14.3.2 Higher-order Markov Model
14.3.3 Hidden Markov Model
14.4 GRAMMAR AND PATTERN-BASED APPROACHES
14.4.1 Regular expressions
14.4.2 More complex grammars
14.5 NEURAL NETWORKS
14.5.1 Window-based methods
14.5.2 Recurrent Neural Networks
14.5.3 Long short-term memory networks
14.5.4 Transformer models
14.6 STATISTICAL AND NUMERICAL TECHNIQUES
14.6.1 Simple data cleaning techniques
14.6.2 Logarithmic transformations and exponential growth
14.6.3 ARMA models
14.6.4 Mixed statistics/ML models
14.7 MULTI-STAGE/SCALE
14.8 SUMMARY

Chapter 15 ■ Planning and robotics
15.1 OVERVIEW
15.2 INTRODUCTION
15.2.1 Friend or foe?
15.2.2 Different kinds of robots
15.3 GLOBAL PLANNING
15.3.1 Planning actions – means–ends analysis
15.3.2 Planning routes – configuration spaces
15.4 LOCAL PLANNING
15.4.1 Local planning and obstacle avoidance
15.4.2 Finding out about the world
15.5 LIMBS, LEGS AND EYES
15.5.1 Limb control
15.5.2 Walking – on one, two or more legs
15.5.3 Active vision
15.6 PRACTICAL ROBOTICS
15.6.1 Controlling the environment
15.6.2 Safety and hierarchical control
15.7 SUMMARY

Chapter 16 ■ Agents
16.1 OVERVIEW
16.2 SOFTWARE AGENTS
16.2.1 The rise of the agent
16.2.2 Triggering actions
16.2.3 Watching and learning
16.2.4 Searching for information
16.3 REINFORCEMENT LEARNING
16.3.1 Single step learning
16.3.2 Choices during learning
16.3.3 Intermittent rewards and credit assignment
16.4 COOPERATING AGENTS AND DISTRIBUTED AI
16.4.1 Blackboard architectures
16.4.2 Distributed control
16.5 LARGER COLLECTIVES
16.5.1 Emergent behaviour
16.5.2 Cellular automata
16.5.3 Artificial life
16.5.4 Swarm computing
16.5.5 Ensemble methods
16.6 SUMMARY

Chapter 17 ■ Web scale reasoning
17.1 OVERVIEW
17.2 THE SEMANTIC WEB
17.2.1 Representing knowledge – RDF and triples
17.2.2 Ontologies
17.2.3 Asking questions – SPARQL
17.2.4 Talking about RDF – reification, named graphs and provenance
17.2.5 Linked data – connecting the Semantic Web
17.3 MINING THE WEB: SEARCH AND SEMANTICS
17.3.1 Search words and links
17.3.2 Explicit markup
17.3.3 External semantics
17.4 USING WEB DATA
17.4.1 Knowledge-rich applications
17.4.2 The surprising power of big data
17.5 THE HUMAN WEB
17.5.1 Recommender systems
17.5.2 Crowdsourcing and human computation
17.5.3 Social media as data
17.6 SUMMARY

Section IV Humans at the Heart

Chapter 18 ■ Expert and decision support systems
18.1 OVERVIEW
18.2 INTRODUCTION – EXPERTS IN THE LOOP
18.3 EXPERT SYSTEMS
18.3.1 Uses of expert systems
18.3.2 Architecture of an expert system
18.3.3 Explanation facility
18.3.4 Dialogue and UI component
18.3.5 Examples of four expert systems
18.3.5.1 Example 1: MYCIN
18.3.5.2 Example 2: PROSPECTOR
18.3.5.3 Example 3: DENDRAL
18.3.5.4 Example 4: XCON
18.3.6 Building an expert system
18.3.7 Limitations of expert systems
18.4 KNOWLEDGE ACQUISITION
18.4.1 Knowledge elicitation
18.4.1.1 Unstructured interviews
18.4.1.2 Structured interviews
18.4.1.3 Focused discussions
18.4.1.4 Role reversal
18.4.1.5 Think-aloud
18.4.2 Knowledge Representation
18.4.2.1 Expert system shells
18.4.2.2 High-level programming languages
18.4.2.3 Ontologies
18.4.2.4 Selecting a tool
18.5 EXPERTS AND MACHINE LEARNING
18.5.1 Knowledge elicitation for ML
18.5.1.1 Acquiring tacit knowledge
18.5.1.2 Feature selection
18.5.1.3 Expert labelling
18.5.1.4 Iteration and interaction
18.5.2 Algorithmic choice, validation and explanation
18.6 DECISION SUPPORT SYSTEMS
18.6.1 Visualisation
18.6.2 Data management and analysis
18.6.3 Visual Analytics
18.6.3.1 Visualisation in VA
18.6.3.2 Data management and analysis for VA
18.7 STEPPING BACK
18.7.1 Who is it about?
18.7.2 Why are we doing it?
18.7.3 Wider context
18.7.4 Cost–benefit balance
18.8 SUMMARY

Chapter 19 ■ AI working with and for humans
19.1 OVERVIEW
19.2 INTRODUCTION
19.3 LEVELS AND TYPES OF HUMAN CONTACT
19.3.1 Social scale
19.3.2 Visibility and embodiment
19.3.3 Intentionality
19.3.4 Who is in control
19.3.5 Levels of automation
19.4 ON A DEVICE – INTELLIGENT USER INTERFACES
19.4.1 Low-level input
19.4.2 Conversational user interfaces
19.4.3 Predicting what next
19.4.4 Finding and managing information
19.4.5 Helping with tasks
19.4.6 Adaptation and personalisation
19.4.7 Going small
19.5 IN THE WORLD – SMART ENVIRONMENTS
19.5.1 Configuration
19.5.2 Sensor fusion
19.5.3 Context and activity
19.5.4 Designing for uncertainty in sensor-rich smart environments
19.5.5 Dealing with hiddenness – a central heating controller
19.6 DESIGNING FOR AI–HUMAN INTERACTION
19.6.1 Appropriate intelligence – soft failure
19.6.2 Feedback – error detection and repair
19.6.3 Decisions and suggestions
19.6.4 Case study: OnCue – appropriate intelligence by design
19.7 TOWARDS HUMAN–MACHINE SYNERGY
19.7.1 Tuning AI algorithms for interaction
19.7.2 Tuning interaction for AI
19.8 SUMMARY

Chapter 20 ■ When things go wrong
20.1 OVERVIEW
20.2 INTRODUCTION
20.3 WRONG ON PURPOSE?
20.3.1 Intentional bad use
20.3.2 Unintentional problems
20.4 GENERAL STRATEGIES
20.4.1 Transparency and trust
20.4.2 Algorithmic accountability
20.4.3 Levels of opacity
20.5 SOURCES OF ALGORITHMIC BIAS
20.5.1 What is bias?
20.5.2 Stages in machine learning
20.5.3 Bias in the training data
20.5.4 Bias in the objective function
20.5.5 Bias in the accurate result
20.5.6 Proxy measures
20.5.7 Input feature choice
20.5.8 Bias and human reasoning
20.5.9 Avoiding bias
20.6 PRIVACY
20.6.1 Anonymisation
20.6.2 Obfuscation
20.6.3 Aggregation
20.6.4 Adversarial privacy
20.6.5 Federated learning
20.7 COMMUNICATION, INFORMATION AND MISINFORMATION
20.7.1 Social media
20.7.2 Deliberate misinformation
20.7.3 Filter bubbles
20.7.4 Poor information
20.8 SUMMARY

Chapter 21 ■ Explainable AI
21.1 OVERVIEW
21.2 INTRODUCTION
21.2.1 Why we need explainable AI
21.2.2 Is explainable AI possible?
21.3 AN EXAMPLE – QUERY-BY-BROWSING
21.3.1 The problem
21.3.2 A solution
21.3.3 How it works
21.4 HUMAN EXPLANATION – SUFFICIENT REASON
21.5 LOCAL AND GLOBAL EXPLANATIONS
21.5.1 Decision trees – easier explanations
21.5.2 Black-box – sensitivity and perturbations
21.6 HEURISTICS FOR EXPLANATION
21.6.1 White-box techniques
21.6.2 Black-box techniques
21.6.3 Grey-box techniques
21.7 SUMMARY

Chapter 22 ■ Models of the mind – Human-Like Computing
22.1 OVERVIEW
22.2 INTRODUCTION
22.3 WHAT IS THE HUMAN MIND?
22.4 RATIONALITY
22.4.1 ACT-R
22.4.2 SOAR
22.5 SUBCONSCIOUS AND INTUITION
22.5.1 Heuristics and imagination
22.5.2 Attention, salience and boredom
22.5.3 Rapid serial switching
22.5.4 Disambiguation
22.5.5 Boredom
22.5.6 Dreaming
22.6 EMOTION
22.6.1 Empathy and theory of mind
22.6.2 Regret
22.6.3 Feeling
22.7 SUMMARY

Chapter 23 ■ Philosophical, ethical and social issues
23.1 OVERVIEW
23.2 THE LIMITS OF AI
23.2.1 Intelligent machines or engineering tools?
23.2.2 What is intelligence?
23.2.3 Computational argument vs. Searle’s Chinese Room
23.3 CREATIVITY
23.3.1 The creative process
23.3.2 Generate and filter
23.3.3 The critical edge
23.3.4 Impact on creative professionals
23.4 CONSCIOUSNESS
23.4.1 Defining consciousness
23.4.2 Dualism and materialism
23.4.3 The hard problem of consciousness
23.5 MORALITY OF THE ARTIFICIAL
23.5.1 Morally neutral
23.5.2 Who is responsible?
23.5.3 Life or death decisions
23.5.4 The special ethics of AI
23.6 SOCIETY AND WORK
23.6.1 Humanising AI or dehumanising people
23.6.2 Top-down: algorithms grading students
23.6.3 Bottom-up: when AI ruled France
23.6.4 AI and work
23.7 MONEY AND POWER
23.7.1 Finance and markets
23.7.2 Advertising and runaway AI
23.7.3 Big AI: the environment and social impact
23.8 SUMMARY

Section V Looking Forward

Chapter 24 ■ Epilogue: what next?
24.1 OVERVIEW
24.2 CRYSTAL BALL
24.3 WHAT NEXT: AI TECHNOLOGY
24.3.1 Bigger and Better
24.3.2 Smaller and Smarter
24.3.3 Mix and Match
24.3.4 Partners with People
24.4 WHAT NEXT: AI IN THE WORLD
24.4.1 Friend or Foe?
24.4.2 Boom then Bust
24.4.3 Everywhere and nowhere
24.5 SUMMARY – FROM HYPE TO HOPE

Bibliography
Index

Product details

ISBN
9780367536879
Published
2025-05-21
Edition
2nd edition
Publisher
Chapman & Hall/CRC
Height
280 mm
Width
210 mm
Audience level
G, U, UF, 01, 05, 08
Language
English
Format
Hardback
Number of pages
704

Author

About the contributors

Alan Dix is Director of the Computational Foundry at Swansea University, a £30 million initiative to boost computational research in Wales with a strong focus on creating social and economic benefit. He has previously worked in a mix of academic, commercial and government roles. Alan is principally known for his work in human-computer interaction: he is the author of one of the major international textbooks on HCI, as well as of over 450 research publications ranging from formal methods to intelligent interfaces and design creativity. Technically, he works equally happily with AI and machine learning and with traditional mathematical and statistical techniques. He has a broad understanding of mathematical, computational and human issues, and he authored some of the earliest papers on gender and ethnic bias in black-box algorithms.