Will a computer take my job? Archaeology and technological development

In recent years there’s been a lot of discussion about how soon computers are going to take over, which jobs will be lost to mechanisation, and how we deal with the resulting unemployment and political change. Journalists and think tanks have evoked the spectre of Skynet, the evil defence system from the Terminator franchise, to ask how we deal with the impact of Artificial Intelligence (AI) and increased mechanisation. The BBC even published a handy computerisation checker to see if a robot will take your job over the next 20 years. Some have predicted that in time a large proportion of jobs will be automated, even those that require high skills, compassion or intellect, and that we need to prepare for the effect of this on society with political and economic measures like the citizens’ income.

At present archaeology is unlikely to be automated. It doesn’t appear in the BBC list of professions likely to be automated in the next 20 years. The closest profession to archaeology is ‘Social and humanities scientist’ with a 10.4% probability of automation, a figure low enough to be reassuring. But given the march of technology and the increasing availability of computer programmes for archaeological investigation, many have suggested that even complex jobs like that of an archaeologist will eventually be automated, even if this takes 50 or 100 years.

The idea of automation also has a deep but often unconscious effect upon the perception of archaeology amongst both professionals and the public, particularly where archaeologists are making use of highly computerised technologies, such as Geographic Information Systems (GIS), satellite remote sensing, geophysical analysis, and others. The perception, perhaps fuelled by the way technology is used as a ‘magic box’ in popular culture, is that data goes in and unambiguous archaeological answers come out. This perception is both deeply inaccurate and dangerous for the scientific profession, including the ‘technological archaeologist’. It fosters the idea that answers generated by technology are straightforward and unambiguous, when in reality they are anything but (as is well demonstrated by the debate over the radar scanning of Tutankhamun’s tomb). It also reduces the archaeologist to little more than a ‘data chauffeur’, collecting or loading the data into the programme and then presenting the answer at the end.

While both perceptions grotesquely devalue the role of the archaeologist or scientist, it is the latter which I believe contributes to the oft-repeated assertion that even subtle, nuanced jobs requiring flexibility and creativity are at risk of automation. After all, if the archaeologist (now downgraded to little more than a technician) need only load the data and present the result at the end, is that highly educated scientist really doing anything anyone else couldn’t do? Surely as machines get better they’ll be able to load their own data and present the result, eliminating another job?

The reality is that obtaining useful answers to archaeological questions usually requires various intermediate stages of data processing (sometimes in a different programme from the one that will perform the ‘main’ processing), initial analysis, further analysis and statistical validation. But even this list doesn’t really convey the actual role of the archaeologist or why we couldn’t just programme the computer to undertake all those stages. To really understand why human input is required throughout the process we need to look at how an archaeologist interacts with a computer programme to obtain useful answers to their questions, where the process is or could be automated, and where it relies upon professional judgement and experience.

I have long thought that some of the public anxiety and media hype about the rise of the machines exaggerates the reality of what technology can actually achieve. While it’s clear that many jobs will be automated in the future and we need to deal with the political and economic effects of that, to truly understand which jobs will disappear we need to unpick the details of our professions and consider which elements could be automated and which either require a human or are done faster by one.

My own GIS research into visibility (often called ‘viewshed analysis’) has given me some insights into how difficult it would be for a computer to be an effective archaeologist. It has long been possible for a GIS programme to rapidly and efficiently calculate visibility from a given point, either to another point (i.e. line of sight) or more generally across the landscape (generating what is called ‘a viewshed’). To do this it needs only a digital terrain model of the topography and the point from which visibility is to be calculated. But knowing what is visible from, say, the Great Pyramid, or Stonehenge, doesn’t actually answer any particularly exciting archaeological questions. Even the most basic archaeological question – where could the Great Pyramid be seen from – requires us to both obtain more information and make judgements, judgements a computer couldn’t make. Firstly, we must decide who is doing the seeing. If we are talking about people walking about on the ground, we need to know how tall they were. If we are interested in people within a nearby city or temple, we need to know both how tall they were and how tall was any structure they were standing on (the city walls perhaps?). The most basic of archaeological questions requires us to obtain more information and make professional judgements about the nature of nearby structures and the heights of the population. And we still haven’t really learned anything useful yet – the Great Pyramid is obviously large and obviously very visible, so we didn’t need a computer to tell us it could be seen from a large area.
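To make this concrete, here is a minimal sketch of the line-of-sight calculation itself, reduced to a one-dimensional terrain profile. All the elevations and observer heights are invented for illustration – a real GIS performs the same gradient comparison over a full 2-D terrain model – but notice that the observer height is a parameter the archaeologist, not the computer, must supply from evidence:

```python
# A minimal sketch of a line-of-sight test along a 1-D terrain profile.
# All elevations and observer heights are invented for illustration.

def line_of_sight(terrain, observer_idx, target_idx, observer_height=1.7):
    """Return True if the target cell is visible from the observer.

    terrain         : ground elevations (metres) at evenly spaced cells
    observer_height : eye height above ground -- exactly the kind of
                      parameter a human must supply from evidence.
    """
    eye = terrain[observer_idx] + observer_height
    step = 1 if target_idx > observer_idx else -1
    max_gradient = float("-inf")
    for i in range(observer_idx + step, target_idx + step, step):
        # Gradient from the eye to this cell; a cell is visible only if
        # its gradient is at least as steep as every gradient before it.
        gradient = (terrain[i] - eye) / abs(i - observer_idx)
        if i == target_idx:
            return gradient >= max_gradient
        max_gradient = max(max_gradient, gradient)
    return False

# A ridge blocks a person on the ground but not a viewer on a high wall.
profile = [10, 10, 14, 10, 10]
print(line_of_sight(profile, 0, 4, observer_height=1.7))   # False: blocked
print(line_of_sight(profile, 0, 4, observer_height=10.0))  # True: wall-top view
```

Changing a single assumption – standing on the ground versus standing on a city wall – flips the answer, which is precisely why the parameter cannot be left to the machine.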

To really answer interesting questions about visibility at Giza we need to interact further with our GIS programme. We’ve now determined the height of the population and any relevant structures and calculated precisely where the Great Pyramid could be seen from. Why don’t we repeat the process for the other two kingly pyramids at Giza? That might provide us with useful archaeological information: are there any areas from which all the pyramids could be seen? Are there any areas where they were all invisible? Do those areas correlate with any specific archaeological sites? These questions might provide us with really interesting answers. But to answer them we need to interact with the GIS in stages, re-running the analysis for each pyramid, then combining the results. This involves several procedures today, but even if we could code the programme to run through the sequence by itself, the results alone tell us nothing useful archaeologically. We’d need to look at the areas from which the three pyramids are visible or invisible and use our archaeological knowledge and experience to consider if there are any sensible archaeological reasons they might have been excluded or included. Are there any archaeological sites that might have required a view of all pyramids (the capital Memphis or the temple of Heliopolis for example)? If so, do we think, based on our knowledge of ancient Egyptian culture, that a deliberate decision was made to ensure the three pyramids were all visible from those sites? Can we perform a statistical analysis to show that our results are statistically significant and aren’t just coincidence? Or can we demonstrate, by analysing lots of other locations on the Giza plateau, that the locations of these pyramids were the only ones that ensured a consistent view of all three pyramids from, for example, the capital of Memphis or the temple of Heliopolis?
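The combining step itself is mechanically simple – it is choosing what to combine, and what to ask of the result, that requires the archaeologist. A toy sketch, with every grid cell and coordinate invented for illustration (a real GIS would output raster viewsheds, but the boolean logic is the same):

```python
# Toy sketch: each viewshed is the set of grid cells from which that
# pyramid can be seen. All cells and coordinates here are invented.

grid = {(r, c) for r in range(4) for c in range(4)}

khufu    = {(0, 0), (0, 1), (1, 1), (2, 2)}
khafre   = {(0, 1), (1, 1), (2, 2), (3, 0)}
menkaure = {(1, 1), (2, 2), (3, 0), (3, 3)}

all_visible  = khufu & khafre & menkaure           # every pyramid seen
none_visible = grid - (khufu | khafre | menkaure)  # no pyramid seen

# It is the archaeologist who asks whether a known site falls in either zone.
memphis = (1, 1)  # purely illustrative coordinate
print(memphis in all_visible)  # True
```

The set operations take microseconds; deciding that Memphis is the site worth testing, and what it would mean if it fell in the ‘all visible’ zone, is the research.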

Each stage of this putative research involves GIS analysis, from the initial viewshed showing where the Great Pyramid could be seen, to the last investigation of the viewsheds of other locations without pyramids across the Giza plateau. While the computer performs various specific analyses at each stage, it is the archaeologist who turns computerised assessments of the visibility of individual pyramids and locations on the Giza plateau into a genuinely interesting piece of research investigating where the three pyramids could be seen from and if that is both statistically significant (i.e. it isn’t coincidental) and culturally significant (i.e. it is consistent with Egyptian culture). At each stage the archaeologist is required to exercise both experience and judgement: in collecting data and setting parameters such as the height of the population, evaluating the results of the computer analysis with reference to archaeological data such as the locations of Memphis and Heliopolis, and directing the next stage of the research towards answering an archaeologically interesting question about the motives governing the positioning of the Giza pyramids.
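Even the statistical check mentioned above follows this pattern. A deterministic sketch, with the number of candidate locations and the visible subset entirely invented: we enumerate every possible triple of locations on a toy plateau and ask how often all three would be visible from a given site.

```python
# Toy sketch of the "is it coincidence?" check: enumerate every possible
# triple of candidate locations and count those fully visible from a site.
# The 20 candidates and the visible subset are invented for illustration.

from itertools import combinations

candidates = range(20)
visible_from_site = {1, 3, 5, 7, 11, 13, 17}  # 7 of 20 visible (toy data)

total = hits = 0
for triple in combinations(candidates, 3):
    total += 1
    if all(loc in visible_from_site for loc in triple):
        hits += 1

p = hits / total
print(f"{hits} of {total} triples fully visible (p = {p:.3f})")
```

A small proportion would suggest that a fully visible placement is unlikely to arise by chance alone – but whether a low probability reflects deliberate intent on the part of the pyramid builders remains an archaeological judgement, not a computational one.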

In this particular example, and in most computerised or technical archaeological analyses, the archaeologist is the keystone that holds the digital analyses together, forming them into a coherent piece of research that answers an archaeologically interesting question. The archaeologist is only able to do that because they have experience in the technical and cultural aspects of their subject and are able to make rational judgements based on that experience, which direct the research towards the often uncertain goal of answering useful and interesting archaeological questions. We might one day create a computer that can do this, but no modern computer can even begin to perform that synthetic but instinctual task of guiding a developing project towards an amorphous goal – a goal that often changes as the evidence develops, while taking due account of the constraints implied by the specific Egyptian culture and archaeological context.

While reassuring us about the potential for human archaeology during the rise of the machines, clear consideration of exactly how we work with and interact with technology is also to be welcomed for other reasons. A better understanding of the role of technology within scientific disciplines like archaeology will mean consumers of archaeological information and results will better understand the accuracy and limitations of those results, and hopefully will be less likely to be ‘blinded by science’. It should result in greater respect for the ‘technical archaeologists’, who are sometimes sidelined as ‘operators’ and ‘technicians’, and a better understanding of the complexities involved in obtaining genuine answers to archaeological research questions using technology.

I suspect that this latter issue, in particular, will become surprisingly important over the next decade. We have seen a huge technological step forward in terms of the variety of data, analytical techniques and computer programmes that are available, but unlike the previous generation of technological advances (such as Carbon 14 dating or residue analysis), the application of more recent techniques to archaeological data in order to answer research questions is not always straightforward. This has led to a certain amount of technically-driven archaeology, where a new technique is applied to archaeological data but not incorporated into a theoretical or analytical framework for answering meaningful archaeological questions (this is sometimes called ‘technological determinism’). There’s nothing wrong with applying new techniques to archaeology, of course, but they need to be applied in a way that is archaeologically meaningful. My own research into Egyptian quarries isn’t intended to develop or showcase brand new technology, but to apply recently developed techniques to answering interesting, and often previously unanswerable, research questions.
If we are to do high quality archaeological research and move beyond the excitement of new technologies, we need to actively consider the processes by which we move from technical analysis to answering research questions. And while we’re at it, we might be able to help out our scientific colleagues and wider society. By demonstrating how to make technologically cutting-edge work meaningful, we can show that the involvement of the human scientist in the technological process is a necessity that cannot simply be replaced by a computer algorithm.
