Observed patterns around major technological advancements

by Rick Korzekwa, 2 February, 2022

In this post I outline apparent regularities in how major new technological capabilities and methods come about. I have not rigorously checked to see how broadly they hold, but it seems likely to me that they generalize enough to be useful. For each pattern, I give examples, then some relatively speculative explanations for why they happen. I finish the post with an outline of how AI progress will look if it sticks to these patterns. The patterns are:

  • The first version of a new technology is almost always of very low practical value
  • After the first version comes out, things improve very quickly
  • Many large, well-known, impressive, and important advances are preceded by lesser-known, but still notable advances. These lesser-known advances often follow a long period of stagnation
  • Major advances tend to emerge while relevant experts are still in broad disagreement about which methods are likely to work

While investigating major technological advancements, I have noticed patterns among the technologies that have been brought to our attention at AI Impacts. I have not rigorously investigated these patterns to see if they generalize well, but I think it is worthwhile to write about them for a few reasons. First, I hope that others can share examples, counterexamples, or intuitions that will help make it clearer whether these patterns are "real" and whether they generalize. Second, supposing they do generalize, these patterns might help us develop a view of the default way in which major technological advancements happen. AI is unlikely to conform to this default, but it should help inform our priors and serve as a reference for how AI differs from other technologies. Finally, I think there is value in discussing and developing common knowledge around which aspects of past technological progress are most informative and decision-relevant to mitigating AI risk.

What I mean by 'major technological advancements'

Throughout this post I am referring to a somewhat narrow class of advances in technological progress. I won't try to rigorously define the boundaries for classification, but the sorts of technologies I am writing about are associated with significant new capabilities or clear turning points in how civilization solves problems. This includes, among many others, flight, the telegraph, nuclear weapons, the laser, penicillin, and the transistor. It does not include achievements that are not primarily driven by improvements in our technological capabilities. For example, many of the structure height or ship size advances included in our discontinuities investigation seem to have occurred primarily because someone had the means and desire to build a large ship or structure and the materials and methods were available, not because of a substantial change in known methods or available materials. It also does not include technologies granting impressive new capabilities that have not yet been useful outside of scientific research, such as gravitational wave interferometers.

I have tried to be consistent with terminology, but I have probably been a bit sloppy in places. I use 'advancement' to mean any discrete event in which a new capability is demonstrated or a device is created that demonstrates a new principle of operation. In some places, I use 'achievement' to mean things like flying across the ocean for the first time, and 'new method' to mean things like using nuclear reactions to release energy instead of chemical reactions.

Things to keep in mind while reading this:

  • The primary purpose of this post is to present claims, not to justify them. I give examples and reasons why we might expect the pattern to exist, but my goal is not to make the strongest case that the patterns are real.
  • I make these claims based on my overall impression of how things go, having spent part of the past few years doing research related to technological progress. I have done some searching for counterexamples, but I have not made a serious effort to analyze them statistically.
  • The proposed explanations for why the patterns exist (if they do indeed exist) are mostly speculative, though they are partly based on well-accepted phenomena like experience curves.
  • There are some obvious sources of bias in the sample of technologies I have looked at. They are mostly things that started out as candidates for discontinuities, which were crowdsourced and then examined in a way that strongly favored giving attention to advancements with clear metrics for progress.
  • I am not sure how to precisely state my credence in these claims, but I estimate roughly 50% that at least one is mostly wrong. My best guess for how I will update on new information is by narrowing the relevant reference class of advancements.
  • Although I did notice these patterns at least somewhat on my own, I don't claim these are original to me. I have seen related ideas in a variety of places, particularly the book Patterns of Technological Innovation by Devendra Sahal.

The first version of a new device or capability is almost always terrible

Alcock and Brown's Vickers Vimy after crossing the North Atlantic and landing in Ireland
The first transatlantic flight was a success, but just barely.

Most of the time when a major milestone is crossed, it is done in a minimal way that is so bad it has little or no practical value. Examples include:

  • The first ship to cross the Atlantic using steam power was slower than a fast ship with sails.
  • The first transatlantic telegraph cable took an entire day to send a message the length of a couple of Tweets, would stop working for long periods of time, and failed completely after three weeks.
  • The first flight across the Atlantic took the shortest reasonable path and crash-landed in Ireland.
  • The first apparatus for purifying penicillin did not produce enough to save the first patient.
  • The first laser did not produce a bright enough spot to photograph for publication.

Although I can't say I predicted this would be the case before learning about it, in retrospect I don't think it is surprising. As we work toward an invention or achievement, we improve things along a limited number of axes with each design modification, often at the expense of other axes. In the case of improving an existing technology until it is fit for some new task, this means optimizing heavily on whichever axes are most important for accomplishing that task. For example, to modify a World War I bomber to cross the ocean, you must optimize heavily for range, even if it requires giving up other features of practical value, like the ability to carry ordnance. Increasing range without making such tradeoffs makes the problem much harder. Not only does this make it less likely you will beat competitors to the achievement, it means you must do design work in a way that is less iterative and requires more inference.

This also applies to building a device that demonstrates a new principle of operation, like the laser or transistor. The easiest path to making the device work at all will, by default, ignore many unnecessary practical considerations. Skipping straight past the dinky minimal version to the practical version is difficult, not only because designing something is harder with more constraints, but because we can learn a lot from the dinky minimal version on our way to the more practical version. This seems related to why Theodore Maiman won the race to build a laser. His competitors were pursuing designs that had clearer practical value but were significantly harder to get right.

The clearest example I am aware of for a technology that does not fit this pattern is the first nuclear weapon. The need to create a weapon that was useful in the real world quickly and in secret using extremely scarce materials drove the scientists and engineers of the Manhattan Project to solve many practical problems before building and testing a device that demonstrated the basic principles. Absent these constraints, it would probably have been easier to build at least one test device along the way, which would most likely have been useless as a weapon.

Progress often happens more quickly following a major advancement

In less than ten years, nuclear weapons advanced from large devices with a yield of 25 kilotons (left) to artillery shells with a yield of 15 kilotons (top right) and large 25 megaton devices (lower right)

New technologies may start out terrible, but they tend to improve quickly. This is not just a matter of applying a constant rate of fractional improvement per year to a higher level of performance; the doubling time for performance decreases. Examples:

  • Average speed to cross the Atlantic in wind-powered ships doubled over roughly 300 years, after which crossing speed for steam-powered ships doubled in about 50 years, and crossing speed for aircraft doubled twice in less than 40 years.
  • Telecommunications performance, measured as the product of bandwidth and distance, doubled every 6 years before the introduction of fiber optics and doubled every 2 years after.
  • The energy released per mass of conventional explosive increased by 20% during the 100 years leading up to the first nuclear weapon in 1945, and by 1960 there were devices with 1000x the energy density of the first nuclear weapon.
  • The highest intensity available from artificial light sources had an average doubling time of about 10 years from 1800 to 1945. Following the invention of the laser in 1960, the doubling time averaged just six months for over 15 years. Lasers also found practical applications very quickly, with successful laser eye surgery within 18 months and laser rangefinders a few years after that.
The rate of progress in carrying a military payload across the Atlantic increased each time the method changed
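To make the decreasing-doubling-time claim concrete: assuming simple exponential growth between two data points, the implied doubling time is t · ln 2 / ln(growth factor). A minimal sketch using rounded figures from the crossing-speed example above (the numbers are illustrative, not fitted to the underlying data):

```python
import math

def doubling_time(years: float, growth_factor: float) -> float:
    """Doubling time implied by exponential growth of `growth_factor` over `years`."""
    return years * math.log(2) / math.log(growth_factor)

# Rough figures from the transatlantic crossing-speed example (illustrative):
#   sail:     speed doubled (2x) over ~300 years
#   steam:    speed doubled (2x) in ~50 years
#   aircraft: speed doubled twice (4x) in ~40 years
for label, years, factor in [("sail", 300, 2), ("steam", 50, 2), ("aircraft", 40, 4)]:
    print(f"{label}: doubling time ~ {doubling_time(years, factor):.0f} years")
```

The doubling time falls from roughly 300 years to 50 to 20, rather than staying constant as it would under a fixed fractional rate of improvement per year.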

This pattern is less robust than the others. An obvious counterexample is Moore's Law and related trajectories for progress in computer hardware, which remain remarkably smooth across major changes in underlying technology. Still, even if major technological advances don't always accelerate progress, they seem to be one of the major causes of accelerated progress.


Short-term increases in the rate of progress may be explained by the wealth of potential improvements to a new technology. The first version will almost always neglect numerous improvements that are easy to find. Additionally, once the technology is out in the world, it becomes much easier to learn how to improve it. This can be seen in experience curves for manufacturing, which typically show large improvements per unit produced in manufacturing time and cost when a new technology is introduced.

I am not sure if this can explain long-running increases in the rate of progress, such as those for telecommunications and transatlantic travel by steamship. Maybe these technologies benefited from a large space of design improvements and sufficient, sustained growth in adoption to push down the experience curve quickly. It may be related to external factors, like the advent of steam power increasing general economic and technological progress enough to sustain the strong trend in advancements for crossing the ocean.

Major advancements are usually preceded by rapid or accelerating progress

Searchlights during World War II used arc lamps, which were, as far as I know, the first artificial sources of light with greater intensity than concentrated sunlight. They were surpassed by explosively-driven flashes during the Manhattan Project.

Major new technological advances are often preceded by lesser-known, but substantial advancements. While we were investigating candidate technologies for the discontinuities project, I was often surprised to find that this prior progress was rapid enough to eliminate or drastically reduce the calculated size of the candidate discontinuity. For example:

  • The invention of the laser in 1960 was preceded by other sources of intense light, including the argon flash in the early 1940s and various electric discharge lamps by the 1930s. Progress was fairly slow from ~1800 to the 1930s, and more-or-less nonexistent for centuries before that.
  • Penicillin was discovered during a period of major improvements in public health and treatments for bacterial infections, including a drug that was such an improvement over existing treatments for infection it was called the "magic bullet".
  • The Haber Process for fixing nitrogen, often given substantial credit for improvements in farming output that enabled the massive population growth of the past century, was invented in 1920. It was preceded by the invention of two other processes, in 1910 and 1912, the latter of which was more than a factor of two improvement over the former.
  • The telegraph and flight both crossed the Atlantic for the first time during a roughly 100 year period of rapid progress in steamships, during which crossing times decreased by a factor of four.

This pattern seems less robust than the previous one and it is, to me, less striking. The accelerated progress preceding a major advancement varies widely in terms of overall impressiveness, and there is not a clear cutoff for what should qualify as fitting the pattern. However, when I tried to come up with a clear example of something that fails to fit the pattern, I had some difficulty. Even nuclear weapons were preceded by at least one notable advance in conventional explosives a few years earlier, following what seems to have been decades of relatively slow progress.

An obvious and perhaps boring explanation for this is that progress on many metrics was accelerating during the roughly 200-300 year period when most of these took place. Technological progress exploded between 1700 and 2000, in a way that seems to have been accelerating quite rapidly until around 1950. Every point on an aggressively accelerating performance trend follows a period of unprecedented progress. It is plausible to me that this fully explains the pattern, but I am not entirely convinced.
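That boring explanation can be illustrated with a toy model: on a superexponential trend, the growth rate increases monotonically, so every point on the curve is automatically preceded by the fastest progress yet seen. A minimal sketch (the curve and its constant are illustrative, not fitted to any data):

```python
# Toy superexponential trend: performance p(t) = exp(t^2 / C).
# The instantaneous fractional growth rate d(ln p)/dt = 2t / C
# increases with t, so every point sits at the end of a period
# of record-fast growth -- the "boring explanation" above.
C = 1000.0
rates = [2 * t / C for t in range(1, 11)]  # growth rate at t = 1..10

# Each successive rate is a new record:
assert all(rates[i] > max(rates[:i]) for i in range(1, len(rates)))
print("on this curve, growth is record-breaking at every step")
```

This shows only that accelerating background progress is sufficient to produce the pattern, not that it is the actual cause in any particular case.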

An additional contributor may be that the drivers for research and innovation that result in major advances tend to cause other advancements along the way. For example, wars increase interest in explosives, so maybe it is not surprising that nuclear weapons were developed around the same time as some advances in conventional explosives. Another potential driver for such innovation is a disruption to incremental progress via existing methods that requires exploration of a broader solution space. For example, the invention of the argon flash would not have been necessary if it had been possible to improve the light output of arc lamps.


Major advancements are produced by uncertain scientific communities

Alexander Fleming (left foreground) receives his Nobel Prize for his discovery of penicillin, along with Howard Florey (back left) and Ernst Chain (back, second from left). Florey and Chain pursued penicillin as an injectable therapeutic, despite Fleming's insistence that it was only suitable for treating surface infections.

Nearly every time I look into the details of the work that went into producing a major technological advancement, I find that the relevant experts had substantial disagreements about which methods were likely to work, right up until the advancement takes place, sometimes about fairly fundamental things. For example:

  • The group responsible for designing and operating the first transatlantic telegraph cable had disagreements about basic properties of signal propagation on very long cables, which ultimately led to the cable's failure.
  • The scientists of the Manhattan Project had widely varying estimates for the weapon's yield, including scientists who expected the device to fail entirely.
  • Widespread skepticism, misunderstanding, and disagreement about penicillin's chemical and therapeutic properties were largely responsible for a ten year delay in its development.
  • Few researchers involved in the race to build a functioning laser expected the design that ultimately prevailed to be viable.

The default explanation for this seems fairly clear to me: empirical work is an important part of clearing up scientific uncertainty, so we should expect successful demonstrations of novel, impactful technologies to eliminate a lot of uncertainty. However, it did not have to be the case that the eliminated uncertainty is substantial and related to basic facts about the functioning of the technology. For example, the teams competing for the first flight across the Atlantic did not seem to have anything major to disagree about, though they may have made different predictions about the success of various designs.

There is a distinction to be made here, between high levels of uncertainty on the one hand, and low levels of understanding on the other. The communities of researchers and engineers involved in these projects may have collectively assigned substantial credence to various mistaken ideas about how things work, but my impression is that they did at least agree on the basic outlines of the problems they were trying to solve. They mostly knew what they didn't know. For example, much of the disagreement about penicillin was about the details of things like its solubility, stability, and therapeutic efficacy. There wasn't a disagreement about, for example, whether the penicillium fungus was inhibiting bacterial growth by producing an antibacterial substance or by some other mechanism. I'm not sure how to operationalize this distinction more precisely.

Relevance to AI risk

As I explained at the beginning of this post, one of my goals is to develop a typical case for the development of major technological advancements, so we can inform our priors about AI progress and think about the ways in which it may or may not differ from past progress. I have my own views on the ways in which it is likely to differ, but I don't want to get into them here. To that end, here is what I expect AI progress will look like if it fits the patterns of past progress.

  • Major new methods or capabilities for AI will be demonstrated in systems that are generally quite poor.
  • Under the right circumstances, such as a multi-billion-dollar effort by a state actor, the first version of an important new AI capability or method may be sufficiently advanced to be a major global risk or of very large strategic value.
  • An early system with poor practical performance is likely to be followed by very rapid progress toward a system that is more valuable or dangerous.
  • Progress leading up to an important new method or capability in AI is more likely to be accelerating than it is to be stagnant. Notable advances preceding a new capability may not be direct ancestors to it.
  • Although high-risk and transformative AI capabilities are likely to emerge in an environment of less uncertainty than today, the feasibility of such capabilities and which methods can produce them are likely to be contentious issues within the AI community right up until those capabilities are demonstrated.

Thanks to Katja Grace, Daniel Kokotajlo, Asya Bergal, and others for the mountains of data and analysis that most of this article is based on. Thanks to Jalex Stark, Eli Tyre, and Richard Ngo for their helpful comments. All views and errors are my own.

All charts in this post are original to AI Impacts. All images are in the public domain.
