
SAT (Part 12): Transitioning Our Focus Towards Sciences and Technologies

Bryant Rogers

Updated: Jan 6

Part Twelve: Seeing how social accreditation processes spread and scale with scientific breakthroughs and new technologies.


Throughout these blogs, we've explored how social accreditation shapes the norms and behaviors that we all end up following, with the emphasis that this happens whether we realize it or not.


Just as we recapped in part eleven, SAT explains how we establish new standards and norms through explicit and implicit validations and sanctions, and then how we as individuals either reinforce or challenge those standards over time as we choose to conform with or deviate from them in different settings. This entire process creates a dynamic system where our individual identities and collective norms continuously evolve together.


In part ten, we also discussed how hierarchical systems of power and authority rely on these mechanisms of accreditation to set and enforce their standards and how these same social mechanisms can evolve to challenge or disrupt these systems. Now, I want to take that idea and show how it also applies to science and technology, two fields that we often think of as naturally objective but are actually just as subject to social approval as anything else.


Our very perceptions of science and technology can illustrate how specific standards and validations evolve within these dynamic systems. Both concepts have naturally emerged as frames for accreditation mechanisms within social networks and bureaucratic hierarchies, and they're influenced by the same power systems that shape social approval and sanctions.


Going by the Oxford dictionary definitions, science is understood as the systematic study of the physical and natural world, involving observation, experimentation, and the testing of theories against evidence. Technology, on the other hand, refers to the practical application of scientific knowledge, particularly with regard to the development and use of tools and equipment through scientific advancements.


On The Social Accreditation of the Scientific Method

As I've tried to outline in previous posts, accreditation wasn't always about bureaucratic standards, peer-reviewed studies, or academic consensus. Before the Scientific Revolution, legitimacy came mostly from notions of divine authority. Since it was the church and the crown that determined who and what was deemed worthy or valid, nearly all achievements in literature, art, military victories, industrial progress, or leadership were attributed to divine favor or other inherited traits like nobility, honor, or sheer luck.


These early accreditation systems created powerful social norms and mythological expectations, in which people internalized the belief that character development primarily happens through emulating the practices of those who've achieved these same kinds of feats. In an honor-based social system, you'd probably have mimicked the best hunter or the wisest elders. In a nobility-based or divinely ordered society, your norms and values came from local religious leaders or noble houses.


Then the scientific method disrupted all of those systems. It introduced a new process that prioritized discovery and empowered people to challenge long-held beliefs. This shift replaced divine and aristocratic validation with something new: the democratized pursuit of knowledge, grounded in observation and evidence. But, even this new system wasn’t free from social influence. It simply established a different set of rules for accreditation, ones that relied on new explicit structures like academic recognition, publications, patents, and other institutional endorsements.


Rationalism, or the general belief that humans can use reason to gain knowledge, was a dramatic shift from the idea that everyone needed to rely on scripture or church leaders for moral guidance and answers to life's mysteries. The spread of rational values led to new ways of doing research, like peer-reviewed publications, lab experiments, and patents, and these became ways to prove or support beliefs and ideas in the academic world.


The scientific method was also heavily influenced by empiricism, which prioritizes observation, experimentation, and sensory experience as the basis for knowledge. The foundation of empirical knowledge lies in its reliance on observation or experience to establish its validity. Science, as a process, is deeply rooted in the systematic testing of theories against well-established evidence. For instance, when I step outside and take a moment to look up, I can confirm the fact that “the sky is blue.”


The social consensus around these observations creates implicit and explicit ways to validate them as scientific knowledge. For example, knowing that water boils at 100°C is an empirical fact that we can establish through experimentation. The term "boiling point" then becomes a reference for the explicit validation of the shared knowledge of this property. This way of understanding the world became very useful for a lot of things.


In philosophy, "a posteriori" is what we call knowledge that's derived from experience. It's dependent on the information we gather from our senses as empirical evidence, and it can vary based on circumstances. Rational philosophers like Descartes and Newton were also focused on reductionism, simplifying complex phenomena into fundamental principles.


Empirical reductionism, the idea that we can logically understand and know everything about the universe at its fundamental level, led to the scientific method, which adapted to reflect the belief that all knowledge can be reduced to its simplest components, and derived from observation and experience.


So, for instance, the behavior of the body can be explained by the behavior of its organs, which can be explained by the behavior of the cells within the organs, which can be explained by the behavior of the chemicals within the cells, and so on.


Science and technology became essential for improving our own observational methods and understanding our experiences because we constantly needed new tools and wider reaching theories to explore further levels of this reduction; think microscopes and microbiology, or even something more fundamental like particle physics and atomic colliders.


This is what led to the standardization of the human condition, that is, the homogenization of certain characteristics and key events that we acknowledge as canonical parts of the narrative of a human life. A new social consensus emerged in Western civilization around things like our birth, learning and stages of development, emotions, aspirations, ability to reason, morality, inner conflict, and perspectives on death, all grounded in the belief that humans can continue to improve and advance ourselves through social interaction and the manipulation of the natural world.


This focus on science, technology, and social organization as universal applications of reason and observation led to a relentless pursuit of advancement through social reform that became known as progressivism, and it guided Western Civilizations through the "Age of Enlightenment" and the "Industrial Age."


Progressivism spread out of a belief that civility in Europe was improving and expanding, thanks to the application of these new types of empirical knowledge. Thinkers during this era continued to operate within the accreditation frameworks of religious faith and royal nobility, which still conferred legitimacy. But they were increasingly inspired to employ reason and inquiry in their quests to study the world in pursuit of new forms of social recognition.



For example, the American inventor and Freemason Benjamin Franklin famously observed that lightning strikes tended to hit metal objects, and he reasoned that he could therefore direct lightning by positioning metal objects during an electrical storm. He shared his findings in a series of letters to Peter Collinson of the Royal Society of London, and Collinson published them in 1751 in a treatise called "Experiments and Observations on Electricity." The explicit endorsement of Franklin's ideas by Royal Society members like Sir William Watson led to the success of Franklin and his inventions, like the lightning rod.


Following this drive for rationality through the Enlightenment, thinkers began to argue that while empirical observations (a posteriori) are important, they must still be understood within the framework of "a priori" concepts that the mind imposes. “A priori” knowledge is knowledge that's independent of our experience, coming from pure analysis or deduction.


This means it’s known to be true without needing to rely on sensory experience or empirical evidence. One common example of analytic, "a priori" knowledge is the proposition, "all bachelors are unmarried." You don't need to make observations or experiment to understand this. It’s just true by definition.



The philosopher Immanuel Kant took this further when he introduced a third category to challenge the traditional view of knowledge: what he called "synthetic a priori judgments." These are statements that are independent of experience (a priori) but also, in his terms, synthetic, meaning that they add new information beyond their definitions.


So a statement like “every event has a cause” is said to be "synthetic a priori" because it's not just true by definition but also a fundamental principle that shapes how we perceive the world.


For Kant, this new distinction was crucial because it allowed for meaningful, independent knowledge that isn't purely analytic or a mere matter of defining terms. He believed that "synthetic a priori" judgments form the cornerstone of human knowledge in mathematics and the natural sciences.


This synthesis laid the groundwork for a more systematic and organized scientific method that seamlessly integrated observation with a well-defined conceptual framework. This advancement enabled scientific methods to make significant strides in domains such as physics, chemistry, and biology.


In turn, these developments had profound social implications, fostering a worldview where reason and evidence weren’t just guiding scientific inquiry, but also held the potential to analyze and enhance our own society. This paradigm shift revolutionized various fields, including economics, psychology, sociology, and politics, paving the way for the expansion of education and scientific literacy across larger parts of society.


It's easy to overlook that, until recently, philosophers and priests held sole authority in determining what constitutes knowledge. The shift towards reason as a source of knowledge led to a profound transformation in our understanding of the world. It wasn't that people hadn't used reason before; rather, during the age of reason, the scope of rational argument expanded to encompass phenomena that were previously inaccessible to it. But despite our emphasis on rational thinking, the recognition of something as "scientific" still remains heavily influenced by social norms.


Similarly, our technological innovations might depend on formal recognition and validation to gain traction, like with Ben Franklin's lightning rod. But it's the implicit forms of accreditation that determine how rapidly any new technologies or scientific discoveries get integrated into our society. I work as a technology department manager in a retail store on a college campus, and I get a firsthand look at this every day.


On the Accreditation of Technologies in Social Systems

I enjoy and dislike aspects of my job, but it's definitely ignited a curiosity for technology. Although my job itself isn't technical, I'm still usually expected to be an expert on all things tech from the perspective of my customers, so I spend a lot of time trying to understand more. I'm a big fan of learning about the tech behind devices, from the hardware to the software and even the historical context. But it's frustrating when people ignore my advice and only focus on the things that seem most important to them, like the price, battery life, screen size, or speaker volume. And then there are those who come in and say they don't know anything about tech and expect me to sell them whatever I think is best. It's tough to balance my expertise with their needs.


I started learning about computer technology to communicate more effectively at work and to understand the average consumer's perception of value in relation to the economic means of commodity production. I now understand that people interpret technology differently, and that its capabilities depend on social relationships and user applications. I challenged my own understanding and became interested in how things are invented, made, and rendered obsolete. I learned that certain technologies grow and gain acceptance because of social consensus, which can be mediated by other technologies.


As I also covered in part six, the Social Construction of Technology (SCOT) reminds us that technology doesn't determine human action; rather, human action shapes technology. Whether it’s cultural values, political pressures, or economic interests, these all play a role in shaping what technologies get developed and, more importantly, what gets adopted. SAT embodies this SCOT mindset because it shows us that accreditation isn’t just about what works or what’s true, but also about what society agrees has value.


When a technology is first introduced, it usually gets explicit accreditation from official channels like patents, peer-reviewed publications, or endorsements by industry experts.

These official accreditation mechanisms provide initial validation by verifying to the public the technology’s novelty, utility, or scientific merit. But that’s just the first step. For a tech to really embed itself into society, it needs to become a part of our everyday lives, and people need to start using it.


This means that our mythology around the technology has to adapt to fit into the narrative we make about ourselves. Recall our conversation in part eleven about Sen's capability approach. I mentioned how he used the example of a bicycle to show how resources alone don't guarantee our freedom or capability. A bicycle can expand or enhance our mobility, but only if the person using it has the physical ability to ride, knows how to balance and pedal, and has access to roads or safe pathways. The effective power of the bicycle depends not just on the object itself but on the broader context that enables its use.


Sen’s insights remind us that the societal embedding of a technology hinges on its ability to interact with and enhance individual and collective narratives, reshaping how we see ourselves and what we can achieve. Accreditation, then, isn’t just about the technology itself but about its ability to adapt to and enhance the lives of those who use it. Technologies are developed in anticipation of augmenting our skills and behaviors, but their recognition and validation can change depending on how the product or service adapts to changing social cues, regardless of the original intent of the innovation or design.


The concept of genericization reflects this shift vividly. This is when a product becomes so widely used that people refer to the product category itself by the brand name. It can happen with social behaviors adapted from products, as in the transformation of terms like "Google" for web searching or "Tweeting" for posting status updates. But it can also be the case for associations with physical products, like "Band-Aids" for adhesive bandages or "Kleenex" for tissues.


Genericization doesn't just work through brand recognition; through SAT we can see how this effect can also cause us to generalize our understanding of a product or technology based on this kind of accreditation. Take "USB" (Universal Serial Bus), for instance: an acronym for a certified connection standard that has become so synonymous with cable ports and connection technology that most people can't differentiate between USB types, or don't even know what USB stands for. The same thing happens with other explicit types of accreditation like trademarks, titles, and awards.


The fact that we socially adapt these explicit brand or standard names into generic terms shows that these technologies have become so ingrained in everyday life that they transcend their original identities. They’re no longer just unique innovations that need to be recognized through branding and patents. They’re part of our everyday language, showing how quickly they’ve become accepted by society. This process of genericization is a sign that these technologies are getting the social approval they need to become mainstream.


On the other hand, something like commoditization highlights a different facet of social accreditation. When a product or service becomes commoditized, it's no longer differentiated by its unique features or brand but is instead judged solely by factors like price or availability. We also talked about this in part ten with our discussion of Marx, Debord, and commodity fetishism.


This shift happens when a technology becomes so widely adopted that its unique attributes lose all significance; consumers come to see it as a standardized good rather than a groundbreaking innovation. Examples can be seen in the commoditization of cars, personal computers, smartphones, or even cloud storage services. In these cases, the technology's initial explicit accreditation fades into the background, while its implicit social acceptance transforms it into a basic necessity. This is something I see every day in my job.


When genericization and commoditization occur simultaneously, they can strip a technology of its unique identity, further disconnecting consumers from creators. Our implicit use and endorsement of a technology are so powerful that our collective acceptance can cause us to overlook its significance in other contexts, or how we comprehend and appreciate the processes involved in bringing it to us.


For most people, the term “USB cable” means pretty much any wired connection. But there are actually a lot of factors that make different "USB cables" better or worse. These include the design of the connectors, how fast they can transfer data and power, the quality of the materials used, and how well they’re wired. All these things can make a big difference in the value of a "USB cable". But since these factors are so complex and technical, most people just use heuristics or make assumptions to figure out which cables might work for them. Despite its labor, design, and technical achievements, the value of the "USB cable" to the average consumer is hardly more than that of a similar-sized piece of string.
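To make that concrete, here's a minimal sketch in Python of the gap between the consumer heuristic and the underlying specs. The cable attributes and numbers are hypothetical stand-ins I've made up for illustration, not figures from any official USB specification.

```python
from dataclasses import dataclass

@dataclass
class UsbCable:
    """A hypothetical USB cable, reduced to a few of the specs
    that actually differentiate one cable from another."""
    connector: str      # e.g. "USB-C", "Micro-B"
    data_gbps: float    # maximum data transfer rate, in gigabits per second
    power_watts: float  # maximum power delivery
    shielded: bool      # crude stand-in for build and wiring quality

# Two cables a shopper would call "a USB cable", with very different capabilities.
charge_only = UsbCable(connector="USB-C", data_gbps=0.48, power_watts=15, shielded=False)
full_featured = UsbCable(connector="USB-C", data_gbps=40.0, power_watts=100, shielded=True)

def consumer_heuristic(cable: UsbCable) -> str:
    """The kind of shorthand judgment most buyers actually make."""
    return "it fits my phone" if cable.connector == "USB-C" else "wrong plug"

def spec_comparison(a: UsbCable, b: UsbCable) -> str:
    """What the specs say once you look past the generic label."""
    return (f"{b.data_gbps / a.data_gbps:.0f}x the data rate, "
            f"{b.power_watts / a.power_watts:.1f}x the power delivery")

print(consumer_heuristic(charge_only))              # it fits my phone
print(consumer_heuristic(full_featured))            # it fits my phone
print(spec_comparison(charge_only, full_featured))  # 83x the data rate, 6.7x the power delivery
```

Both cables pass the "it fits my phone" test, which is exactly why the generic label does so much quiet work in how we assign value.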


When we think of products and services in a generic, mass-produced way, we can't really appreciate their unique qualities. Our choices are influenced by what others think, and this affects how we value technology and can even stop us from coming up with new ideas. This can be a tough cycle for any industry to break. But it also disconnects people from the technology and social relationships behind the products. Similar to Debord's spectacle, the social representation of technology gains significance over the actual labor and energy involved in its creation. The complexity of these labor and energy processes is so far beyond most people's comprehension that it makes them seem unfathomable.


Think about the adoption of Bluetooth technology, how ubiquitous it has become, and the contrast to the public's understanding of how Bluetooth technology works. For most people, Bluetooth may as well be magic. Yet we use and rely on the technology through different products everyday, and we feel perfectly comfortable making value judgements about Bluetooth-enabled devices despite not really knowing how the technology works. Bluetooth is a wireless network protocol that relies on accredited standards that evolve from institutional endorsements and scientific research.


The evolution from explicit accreditation (through patents and awards) to implicit, collective endorsement (becoming a household name or a commodity) aligns with the Social Construction of Technology (SCOT) framework. SAT builds on this framework by demonstrating that technologies are accredited not just because they “work” or are technically sound but because they resonate with what society deems valuable and necessary.


Understanding how accreditation evolves helps us grasp why some technologies become ubiquitous while others fade into obscurity. And it's important because technologies that get the right kind of collective validation can become so integral to our lives that we no longer see them as innovations but as essential utilities.


This shift is what drives social phenomena like genericization and commoditization, as technologies mature from being viewed as novel breakthroughs to becoming indispensable parts of everyday life. This insight also highlights the recursive nature of social accreditation, where these technologies actively shape societal norms and expectations in accordance with the social validation they receive.



The iPhone is a great example. It first launched in 2007 and became an instant global sensation, selling 270,000 units in its first week and ultimately reshaping our norms and values around communication, entertainment, and even the economy. The iPhone redefined the concept of a smartphone, and it became a symbol of modernity and connectivity. From social media to mobile gaming, from online shopping to digital payments, the iPhone has since revolutionized how we engage with information, services, and networks. Its App Store created a new ecosystem for developers and entrepreneurs and fueled a new wave of innovation that continues to this day.


Moreover, the iPhone's impact extended to the economy by influencing new norms and trends in retail, advertising, and software-as-a-service development. Businesses had to adapt to the mobile-first mindset that the iPhone brought about, leading to new opportunities and challenges across global marketplaces. (Don't worry, we'll talk more about commercialization and how money and markets are influenced by social networks in part 13.) The point is that the iPhone's success paved the way for further progress, with the rise of other tech giants and the proliferation of smartphones as essential tools in our daily lives.



In contrast, the Segway launched in 2001, and despite being innovative and cool, it struggled with usage limitations, factory recalls, and an expensive price tag. It never gained widespread adoption outside of niche markets like tourist attractions and medical communities. Over its lifetime, only about 140,000 units of the original Segway PT were sold.


The disparities here aren't just about the technologies' functionality, but also about the social and economic context these products were introduced in. The iPhone successfully fit into a growing demand for mobile connectivity and media consumption, while the Segway, despite its novelty, lacked a clear practical use for most consumers.


What’s even more interesting is that the feedback loops between society and technology are always shifting. Despite being considered a flop, the process of genericization has kept the name Segway relevant, with it now being synonymous with all kinds of personal transporters and hoverboards. Actually, the company doesn't even produce the original Segway model anymore, having found more success selling e-bikes and e-scooters.


Meanwhile, Apple just recently released their 16th generation of the iPhone, though today’s models are usually met with complaints about the lack of innovation, despite the fact that they're more powerful, durable, and capable than ever before. In this sense, the iPhone and other smartphones have become commoditized; consumers just see them as basic goods.


This shows how societal expectations and perceptions of technology can change over time.


On the Recursive Nature of Innovation and Invention

This recursive relationship between invention and innovation fuels progress. Each new development builds upon the previous ones, and science enhances that progress, leading to cheaper, faster, and more accessible technology over time.


This cycle also explains why such advanced tools are now so accessible to non-tech experts. However, it also means that people feel that they don’t need to have a deep understanding of the technology they use. This is why you don't really need to know how Bluetooth works to use Bluetooth devices.


This happens when society normalizes the divide between specialized scientific knowledge and a general understanding while technology simultaneously becomes so user-friendly that it integrates seamlessly into our daily lives. As a result, we no longer feel the need to understand how it works or think about the costs involved, because we associate those responsibilities with the specialized roles of the scientists and developers.


As we've seen in other blogs, when we separate specialized knowledge from general understanding, it can lead to some unanticipated outcomes, especially when it comes to ethics and agency. If users don't understand a technology or its trade-offs, they might simply cede control to manufacturers and corporations.


This disconnect can make people feel like they have no say in the technologies that shape their lives, which can lead to ethical issues around privacy, surveillance, and environmental impact going unaddressed, because the important decisions are made and understood by only a few people.


On the brighter side, this user-friendly approach actually does make technology more accessible to everyone. By lowering the barriers to entry, more people can use, learn from, and improve these tools. This creates a healthy competition and speeds up progress.


In this way, the normalization of technology can be used to empower people and communities, no matter how tech-savvy they are. All of this is important when trying to understand why certain technologies and scientific advances are adopted and validated over others, and how technologies can spread despite not being completely understood by all of their consumers and users.


We can look at the development and spread of the internet as an example. From the late sixties up until 1995, the developing structure of the internet evolved from the military’s ARPANET project, which linked government officials, academics, scientists, and research institutions. What began as a close-knit network for accessing computing power soon transformed into an international platform for communication and connection, thanks to the accreditation of innovations like the TCP/IP framework or Ray Tomlinson’s email system.


With the creation of the World Wide Web by Tim Berners-Lee and other software applications like the Mosaic web browser by Marc Andreessen, the internet became a global phenomenon, connecting 10 million users by 1995. These innovations made using computer technology easier, and lowered the barrier for entry for more people to get connected. This explosive spread of the internet fueled even more rapid technological innovation, leading to a tech boom from 1998 to 2000 that was known as the “dotcom bubble.”


Today, the pace of tech adoption has only intensified. It's difficult to express how rapid this has become. In November 2022, OpenAI’s ChatGPT gained 1 million users in just five days.


For context, it took Netflix three and a half years to reach that milestone after switching to a subscription service in 1999, Facebook ten months in 2004, and Instagram, two and a half months in 2010. By 2023, Meta’s Threads platform set a new record by reaching 1 million user downloads in just one hour.
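To put those milestones side by side, here's a rough back-of-the-envelope comparison in Python using the figures above, converted to approximate days. The conversions are coarse (months treated as 30 days), so treat the ratios as order-of-magnitude only.

```python
# Approximate time each service took to reach ~1 million users,
# converted to days from the figures cited above.
time_to_one_million = {
    "Netflix (1999, subscriptions)": 3.5 * 365,  # about three and a half years
    "Facebook (2004)": 10 * 30,                  # about ten months
    "Instagram (2010)": 2.5 * 30,                # about two and a half months
    "ChatGPT (2022)": 5,                         # five days
    "Threads (2023)": 1 / 24,                    # roughly one hour
}

for service, days in sorted(time_to_one_million.items(), key=lambda kv: -kv[1]):
    print(f"{service}: roughly {days:,.1f} days to 1 million users")

# How much faster the recent launches hit the milestone than Netflix did:
netflix = time_to_one_million["Netflix (1999, subscriptions)"]
print(f"ChatGPT: ~{netflix / time_to_one_million['ChatGPT (2022)']:,.0f}x faster than Netflix")
print(f"Threads: ~{netflix / time_to_one_million['Threads (2023)']:,.0f}x faster than Netflix")
```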



Of course, not everyone embraces this relentless march forward. History is full of resistance movements from people like the Luddites, who pushed back against industrialization, fearing the upheavals it brought. Today, we see similar concerns as people question AI, automation, vaccines, and even the impact of social media platforms. There’s always a social impulse to slow down or reject certain technologies altogether, showing that each wave of progress brings with it a countercurrent of caution and resistance.


Because at the end of the day, technology doesn’t just shape society; society shapes technology too, deciding what gets validated, what gets rejected, and what sticks around to define the next chapter of our collective future.

 

The World as Our Workshop: How Scientific Progress Drives Civilization


Over the years, I’ve learned a lot about science and technology, and how they shape the way we view the world. Science and technology drive progress and improve our quality of life. For example, medical research leads to life-saving treatments and understanding diseases, while renewable energy innovations pave the way for sustainability. These advancements enhance our understanding of the world and empower us to address pressing challenges collectively.


But I’ve also come to realize the ethical issues that come with science and new tech. SAT gives me a new context for understanding discourse and debates about the long-term effects of our actions and a lens to help visualize where we’re going.


But it wasn’t always this clear to me. Growing up, like most of us, I absorbed that dualistic way of thinking that says nature is over there, and humanity, with all our cities and technology, is over here.


Without even realizing it, we separate ourselves from the world we live in. We build systems of beliefs, traditions, and myths that make us feel in control, like we’ve figured out everything we could about our place in the grand scheme of things. But the more I’ve explored SAT and the works of great thinkers, the more I’ve realized how deep this disconnect runs and how important it is to close that gap.


The Journey to Now

We’re constantly creating narratives that help us share values, norms, and beliefs as a society. This is what we call civilization. This collective consensus drives how we validate what’s right, what works, and what’s real. And while this shared understanding helps us stay on the same page, it also shapes how we view ourselves in relation to the world around us.


We’ve spent centuries thinking of nature as something we manage, protect, or exploit. But in doing that, we’ve created this illusion that we’re outside of it, able to manipulate the natural world without being affected by it.


This way of thinking goes way back. In part nine we covered how people like Galileo, Bacon, Pascal, and Bayes became some of the greatest minds in history in spite of their religious beliefs, and laid the groundwork for scientific thinking. They developed the tools for understanding the world through reason and evidence, and as Robert Crease captures in his book "The Workshop and the World", these thinkers, among many others, also helped shape scientific inquiry into a tool for manipulating and controlling nature.


We also talked before about how Descartes, for instance, contributed to this divide with his dualism, which separated the mind from the material world. Descartes’ ideas became foundational in Western philosophy and science. Educational institutions, shaped by Enlightenment ideals, taught Cartesian dualism as a critical framework, embedding it in curricula across Europe and beyond.


As Cartesian dualism informed scientific progress, it also produced tangible insights that led to advancements in medicine and technology. These benefits reinforced societal belief in its frameworks, creating a positive feedback loop that solidified its place in cultural and intellectual life. Over time, Cartesian dualism became an implicit cultural norm through accredited validation by esteemed figures like scientists, philosophers, and theologians.


Its acceptance became less about people actually reading and studying Descartes’ texts and more about living within societal structures shaped by his ideas. Everyday concepts like the “mind-body connection” or distinctions between mental and physical health still reflect Cartesian thought today.


While Enlightenment thinkers pushed us toward rationalism, they also deepened our sense of detachment. Science gradually became less about understanding our place in the world and more about breaking it down into laws, formulas, and equations. This led to a series of discoveries about forces, cells, atoms, and other foundational phenomena, but it also intensified the tension between materialists and idealists, those who believed in a purely physical universe and those who argued that reality is shaped by perception and thought, respectively.


The 19th century was a time when science and industry reshaped how people thought about the world and themselves. Darwin's "Theory of Evolution" (1859) challenged traditional beliefs about human origins and the future. Simultaneously, thermodynamics and steam power revolutionized industry. The combination of these developments gradually sparked optimism about progress, even as growing empathy cast a shadow of concern over slavery, dehumanization, and labor exploitation.


Coincidentally, this also led to rapid globalization, resulting in the implicit pursuit of worldwide interdependence and integration among all countries' economies, markets, societies, and cultures. However, the conflicting ideas behind this pursuit would eventually succumb to the dynamics of tribalism and schismogenesis, leading to an ideological divide that manifested itself between fascists and capitalists, and later between capitalists and socialists. (More on that later, but we also briefly talked about this in part ten!)


In Part Five, we briefly explored how these transformations also catalyzed theories like social Darwinism, where anthropologists and sociologists applied evolutionary principles to sociology, economics, and politics. Cultural evolution and social Darwinism became tools for analyzing diverse societies thought to be barbaric or savage in comparison to the accredited civilized societies, reflecting Eurocentric biases while framing scientific progress through a new lens of cultural hierarchy.


During this same period, the British Empire had also expanded dramatically. This colonial expansion was largely facilitated by technological advancements in transportation, such as steam-powered ships and locomotives. Ships in this era began to be built from iron and steel, sails were replaced with steam engines, and paddles with propellers, making it easier and faster to move goods and people across the sea.

At the same time, steam locomotives revolutionized land transportation, making it quicker and easier to move goods and people overland. These innovations opened up new markets and changed how people thought about transportation and the world. The expansion of railroads, improved roads, and more efficient travel methods increased mobility for people and communities.


This greater accessibility allowed people from different backgrounds to gather in large urban centers where amazing cities were being built. As technology and social order made these metropolitan cities safer and more convenient, they became hubs for larger social institutions and communities. This led to the development of modern concepts like schools, courts, theaters, libraries, post offices, and shopping centers.


Advancements in lithography and photography were also making it possible to copy, print, and share images and maps in new ways that revolutionized teaching, advertising, and media. Returning to our discussion in part eight, we can observe how the media narratives surrounding science and technology were influenced by, and in turn shaped, the prevailing social norms.

Over time, the idea of traveling the globe became a normal-seeming activity, and people started dreaming of even more extraordinary adventures and epic journeys to distant lands. Writers like Jules Verne, who published "A Journey to the Centre of the Earth" in 1864 and "Around the World in 80 Days" in 1872, reflect the role exploration played in the social norms of the time, and the shifting values around science and technology in the view of bureaucratic accreditation bodies. Meanwhile, Lewis Carroll's 1865 novel "Alice's Adventures in Wonderland" reflected the theme of journeying to fantastical lands and deviating from societal pressures and norms.


These were writers schooled in a Victorian era that valued a high standard of personal conduct across all sections of society, at a time when literacy and childhood education became universal standards across Great Britain, reflecting the evolving norms of the State, Church, and Crown. During this time, science and exploration became a mythologized force, viewed as both miraculous and dangerous.


Earlier in the century, there had already been a dissonance: a growing fear of machines and of the lengths humanity was willing to go in the name of scientific validation. This inspired stories like "Frankenstein" by Mary Shelley (1818), mirroring these fears of losing control over technological creations.


Scientists were usually depicted as daring figures who pushed the boundaries of what was possible by risking everything to do it. Again, Dr. Frankenstein comes to mind, but so do characters like Captain Nemo in "20,000 Leagues Under the Sea" (also by Jules Verne), or the dichotomy between Robert Louis Stevenson's Dr. Jekyll and Mr. Hyde.


Towards the end of the century, the fear of humanity's own evolutionary future became realized in stories like H.G. Wells’ "The Time Machine" (1895), where humans evolve into entirely new species. We can read these now and understand that these narratives all reflected society’s excitement and fear about the speed of change, the power of technology, and what it all meant for humanity’s place in the world at that time.


As the 19th century gave way to the 20th, the pace of scientific discovery only accelerated, and it deepened anxieties about the cost of progress. In 1900, the rediscovery of Mendel's genetic theories (cells and chromosomes were not yet understood well enough to give Mendel's abstract ideas a physical context until then) introduced the promise of understanding life's blueprint, though this quickly veered into darker territory with the rise of eugenics. This era of human history is unfortunately filled with instances of unethical experimentation conducted in the name of progress and the pursuit of further advancement.


The early 20th century witnessed global turmoil, particularly with World War I (1914–1918), which profoundly reshaped European society. The advent of automobiles and roads transformed countries and cities, fundamentally altering our physical connections. Cities and governments increasingly distanced themselves from churches and monarchies as conflicts between nation-states shifted their focus to industrial dominance. Moreover, the schismogenesis between democracy and totalitarianism created unprecedented clashes.


In response to the disillusionment following World Wars I and II, as well as the existential crises arising from modernity, industrialization, and secularization, existentialism emerged as a philosophical movement. It became dominant as thinkers focused more on individual freedom, choice, and responsibility, and as the horrors of the Holocaust and atomic warfare amplified people's questions about meaning, morality, and human freedom.


Einstein’s theories of relativity revolutionized our understanding of space and time, inspiring awe while also raising profound questions about humanity’s place in the vast universe. This coincided with the unsettling emergence of quantum mechanics in the 1920s, marked by pivotal developments such as Werner Heisenberg’s formulation of matrix mechanics, Erwin Schrödinger’s wave mechanics theory, and the establishment of the Copenhagen interpretation, primarily attributed to Niels Bohr and Heisenberg. These groundbreaking theories laid the groundwork for comprehending the behavior of matter at the atomic level, revealing mind-bending concepts like uncertainty and parallel realities.


Additionally, Freud’s psychoanalysis further illuminated the subconscious mind and humanity’s darker instincts, leading to a series of transformative changes and new assumptions about the human psyche and personal development.


The Cold War and the space race created narratives that mixed hope for progress with fears about how things could go wrong. The Manhattan Project and the development of nuclear weapons heightened the apprehension surrounding these narratives, as people internalized the belief that these weapons possessed the capability to annihilate the entire world. Nuclear power also became a symbol of humanity's power to create and destroy, as in movies like "Dr. Strangelove" and "Godzilla."


These scientific revolutions cast long shadows, reflecting public fears that science could manipulate life, morality, and even the human soul. As factories hummed with mass production and automation reshaped labor, the mechanization of the early 20th century brought convenience and efficiency, but it also sparked dystopian visions, like Huxley's "Brave New World" (1932), which warned of a world where the dehumanizing effects of medical breakthroughs and assembly lines fed new portrayals of totalitarian regimes.


In the 1950s and 60s, rockets and space discoveries made people think about what they could do and what dangers they might face in space. Space exploration became the modern-day Odyssey. Depictions of artificial intelligence began to rival human intelligence, with AI shown as smart but without feelings, like HAL 9000 in "2001: A Space Odyssey." The same idea runs through stories like "I, Robot" by Isaac Asimov, which came out in 1950.


For many, this shift culminated with Thomas Kuhn’s "The Structure of Scientific Revolutions" in 1962, which is known for initiating the argument that science wasn't purely objective but actually influenced by social and historical contexts. Kuhn’s work opened the door to structurally examining how political, economic, and cultural factors shape scientific advancements in the same manner that Kant's notion of synthetic a priori judgments influenced the way scientists formulated theories.


At this point, it became hard to dismiss the role science and technology play in our lives. The impact that technological innovation had in the 20th century was profound. Airplanes and cars emerged and improved rapidly, radio communication technologies proliferated, mechanical automation influenced mass-produced mindsets, and innovation sprang across agricultural, architectural, medical, and other fields.


Advancements in metallurgy, chemistry, and organic biology created all sorts of new methods for processing materials, like producing aluminum and concrete, as well as new products like synthetic plastics and refrigerants that provided the life-saving means of keeping food and medicine fresh for longer periods of travel or storage. The question started to become: well, how much do science and technology affect our lives?


The discovery of DNA's structure (1953) and the rise of scientific interest in genetic engineering emerged alongside the height of the heroin epidemic and inspired stories about the dangers of mutation, with young characters becoming superheroes in comics like Spider-Man (1962) and X-Men (1963).


Spider-Man reflected the fear of genetic engineering and drug use, with Peter Parker being bitten by a radioactive spider created by corporate tampering (literally having his veins injected with a poison that alters his body on a genetic level). The X-Men, being genetically marginalized, represented the real consequences of social stratification that arise from discrimination based on evolutionary theories and cultural biases around genetic differences.


The advent of personal computers and the Internet gave rise to cyberpunk narratives centered around virtual realities and hacking. Computing was portrayed as both liberating and oppressive, highlighting the blurring of public and private spaces. The proliferation of devices that generate and store more data led to dystopian visions of surveillance and digital authoritarianism, as well as new narratives of cybernetic worlds where human agency is constantly threatened by corporate or AI dominance, as depicted in films like “Blade Runner” (1982), “The Terminator” (1984), and “RoboCop” (1987).


These literary examples are meant to show how scientific advancements transformed both society and imagination throughout the 20th century, and how they also shifted how humanity viewed science itself. What had once been seen as a purely rational and objective pursuit increasingly came to be understood as a human endeavor that was deeply influenced by the social, political, and cultural environments in which it developed. While the scientific method continued to drive discovery, it became clear that the way humans interpreted, applied, and even prioritized scientific knowledge was also shaped by broader societal contexts.


New schools of thought began reinterpreting scientific achievements by focusing on observations and on the contextual influence of gender, race, class, and politics in the development of scientific theories. Rather than being seen as just rational and neutral, scientists were now starting to be understood as products of their time, naturally embedded in complex social systems that shaped them and their work. This became especially apparent during the Science Wars of the 1990s, where scientific realists, those defending science as being rooted in objective evidence, clashed with the postmodernists and sociologists who were arguing that science is shaped by the social and institutional contexts in which it operates.


A notable example of this epistemic debate was the Sokal Affair of 1996, when physicist Alan Sokal published a parody article in the journal "Social Text." In his article, titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” Sokal argued that quantum gravity was a social construct, deliberately using jargon and nonsensical arguments to critique the postmodernist approach to science. The journal published the article without any academic peer review and when Sokal later revealed the hoax, it exposed the rift in credibility between those who viewed science as objective and those who saw it as just a social construct.



The fallout from the Sokal Affair revealed a deeper struggle to reconcile two fundamentally different ways of understanding the role of science in human life. On one side was the view of science as a beacon of objective truth, untethered from human biases, and on the other, an acknowledgment that science is inherently shaped by the cultural, political, and historical forces that surround it. This divide mirrors a broader tension in how humanity sees itself in relation to the world: as separate from nature or as deeply embedded within it.


With How Far We've Come: Are We There Yet?

You can still see this detachment everywhere in modern life. We build towering skyscrapers, develop mind-blowing technology, and create sprawling cities, all while treating the natural world as if it's just something we should either protect or destroy, but not something we’re part of. But the truth is, we are nature. As Neil deGrasse Tyson emphasizes in his book "Cosmic Perspectives", we are made of the same elements as the stars. Every atom in our bodies was forged in the heart of a star billions of years ago. We're literally stardust. Our technologies, cities, and innovations are just uniquely human expressions of the same creative force that drives all life, the human versions of bird nests, ant hills or beaver dams.


In part eight, we discussed how astronomy was once rooted in ancient terrestrial observations, but it has since evolved thanks to advances in science and technology. Today, astrophysicists, like Neil deGrasse Tyson, use concepts from multiple disciplines (like classical mechanics, quantum mechanics, thermodynamics etc.), to understand the observable universe. These scientific achievements have given us incredible insight into the cosmos, but they’ve also perpetuated the idea that we stand apart from the natural world we study.


Historian Yuval Noah Harari captures this tension perfectly in his books, like "21 Lessons for the 21st Century" and "Homo Deus". He explains how the stories we tell about technology feed into an illusion of control. We imagine ourselves as masters of the natural world, transcending biology and bending nature to our will. But this mindset ignores a simple truth: our technologies, for all their brilliance, don’t make us gods. They just show how deeply we’re embedded in the same interconnected web of life that’s been evolving for billions of years.


This illusion of control is especially clear in the 21st century’s most transformative fields. AI and machine learning inspire both excitement and anxiety about intelligence and ethics in stories like "Ex Machina" and "Westworld." Biotechnology breakthroughs like CRISPR have sparked debates about bioethics and human enhancement, as seen in the different classes depicted in "The Hunger Games".


Climate science, meanwhile, has forced us to grapple with our role as both creators and destroyers of environmental balance, fueling narratives of survival and stewardship in works like "Snowpiercer". Let's not forget quantum mechanics and computing, which open doors to mind-bending possibilities about multiverses and nonlinear time in stories like "Tenet" or the numerous new interpretations of the "Groundhog Day" motif, where protagonists find themselves trapped, reliving the same cycle repeatedly.


At their core, these advancements—and the stories we tell about them—reflect our ongoing struggle to understand our place in the bigger picture. Are we separate from the systems we create, or are we just one piece of a larger natural process?


This recognition is important because it challenges the long-standing idea that we can ever objectively observe the world without bias. As fields like neuroscience and cognitive science show, our cognitive and social frameworks influence how we perceive and interact with the world. Every observation we make, every action we take, feeds back into the systems we're studying. This shift in thinking has been profound for me, not just changing how I view science but how I see myself in relation to the world.


I’ve come to understand the importance of social validation in how we experience the world. Accreditation, through its processes of social approval and disapproval, shows how we shape and change our narrative selves in social situations. That’s why I believe it’s a key part of the stories that make up our conscious experiences. In part seven, we talked about the significance of developmental psychology and the work of thinkers like Freud and Erikson, who helped build our understanding of how people adapt and change throughout their lives.


Neuroscience has since made significant strides in attempting to deepen our understanding of the self, revealing how our brains are deeply connected to the rhythms of the world around us. As the neuroscientist and author David Eagleman explains in his book "Livewired", our brains constantly adapt to the environments we create, reflecting the neural plasticity that ties us to the natural world, even in our most "artificial" settings.


The cognitive scientist Donald Hoffman argues that our sense perceptions have actually evolved to hide reality from us, making us see only what we need for survival rather than the full complexity of existence. This debate echoes David Chalmers' hard problem of consciousness. Chalmers' argument is that even as we make strides in understanding neural correlates and cellular brain functions, consciousness still remains a mystery that transcends our knowledge of any physical system.



The question of how the mind relates to the body, and thus to the world around us, is just a modern example of the broader debate about our relationship with nature. And as we move towards even more advances in biology and cognition, it's going to get even weirder, with simultaneous progress happening in synthetic biology and artificial intelligence.


For so long, we’ve thought of ourselves as masters of nature, shaping it to our will. But in reality, it seems that we’re just another species trying to survive and thrive within it. The homes we build, the technologies we create, and the cities we construct are expressions of nature, not something separate. The materials we use and the energy we consume all come from the Earth. We haven’t transcended the natural world; we’ve just convinced ourselves we have.


Our myths and rituals reflect this anthropomorphic dualism. As we embrace new media and communication styles through emerging technologies, we deepen the divide between ourselves and nature. We start treating living animals as just pets and livestock, and we generalize plants and other living beings to fit our idea of a natural order. This process of anthropomorphizing and commodifying nature has caused us to lose sight of the interconnectedness and interdependence that are necessary to sustain life on Earth. By reducing living beings to mere objects or resources, we devalue the natural world and diminish our own sense of belonging and harmony within it.


And this is where the disconnect starts to feel so overwhelming, especially when we look at the state of the planet. Climate change, biodiversity loss, resource depletion; these are the direct result of us thinking we’re above the systems that sustain us. We frame these challenges as if the planet is something we can save or destroy. But, the Earth will keep going with or without us. The real question is whether it will remain a place we can live.


The more I’ve learned, the more I’ve come to understand that the world really is a kind of workshop, where we can innovate, build, and push the limits of what’s possible. And while that mindset has led to some of humanity’s greatest achievements, it’s also led us to believe that we can separate ourselves from nature. But I think my generation has the tools to realize that every innovation, every piece of technology we create, feeds back into the same system and changes us, for better or worse. We need to stop thinking we’re above nature and start recognizing that we’re part of it. And that’s where the future of social accreditation comes in. We need to validate the actions we take, the technologies we build, and the systems we create based on how well they align with this understanding of interconnectedness.


 

We're All Connected: It's Networks All the Way Down


People love to say humans are like a cancer on the planet. I don't buy that. I think we're more like an immune system: powerful, reactive, sometimes limited and destructive, but still capable of healing and protecting when we're focused on the right goals. Our challenge now is to stop seeing ourselves as conquerors and start seeing ourselves as co-inhabitants. We've built amazing cities, we have great tech, and we're privileged with unbelievable medical advances, but we've stopped short of learning how to nurture the framework that connects it all.


In previous blogs I've mentioned that in Bruno Latour’s Actor-Network Theory (ANT), technologies are active participants in a network of human and non-human actors that continuously shape and reshape societal norms. I've been waiting to talk about ANT in more detail since back in part four where we discussed how Social Networking Theory focuses on the structural analysis of human-centric networks.


Both theories view networks as central to understanding relationships, but where SNT offers precise tools to measure and visualize large-scale patterns in networks, ANT provides deeper insights into the processes and roles of non-human actors. ANT is used more to explore how technologies, systems, and social practices co-evolve.


To get a better understanding of how social accreditation influences broader social systems and nonhuman entities, we can use ANT to understand how networks form, evolve, and stabilize. As an example, let's consider medical technology, like the development of a new medication.



The drug discovery process involves numerous actors: researchers conducting experiments, pharmaceutical companies providing funding, regulatory bodies overseeing trials, and doctors administering the drug. Before release, the drug's acceptance is determined by explicit accreditation: its chemical composition and clinical efficacy. But once it's released, its success really depends on how it's implicitly perceived and integrated into this network of actors.


This helps us understand why some technologies succeed while others fail, even when they seem equally viable on a technical level, just like with the iPhone and Segway example from earlier. A medical innovation's success hinges not just on solving a biological problem but also on finding a place within this particular network of actors. Each actor's contribution to how the technology is accredited or rejected reflects how technologies participate in shaping and reshaping the very norms and practices of the systems they enter.


Now, here’s where I want to bring in Gilles Deleuze and Félix Guattari’s ideas about rhizomatic structures, machines, and flows to deepen our understanding of how technologies and scientific ideas spread.


Deleuze was a philosopher known for exploring difference, desire, and multiplicity, and Guattari was a psychoanalyst who was interested in social systems and collective behavior. They collaborated during the political upheavals of 1960s and 70s France and their work challenged the rigidity of institutional systems and hierarchies, instead exploring models of decentralized, dynamic growth that apply remarkably well to modern networked and digital contexts.


One of their most influential ideas is the concept of machines. For Deleuze and Guattari, machines aren’t just physical or mechanical devices but any systems, processes, or structures that produce connections and outcomes. In this broader sense, Latour’s actors can be seen as machines: every actor—whether a person (like doctors or patients), a technology (like medication), or even microorganisms (like bacteria)—functions as a machine within a broader network, continuously producing connections and interactions.


But these broader networks can also become machine-actors themselves as they grow and gain validation and agency through implicit social accreditation processes.


Deleuze and Guattari also introduce the rhizome as a metaphor for non-hierarchical, decentralized growth. Unlike a tree, which grows from a central trunk and branches out predictably, a rhizome spreads underground without a clear center, direction, or hierarchy. This model reflects how scientific ideas and technologies spread today: rather than following linear pathways, they tend to take unpredictable routes, branching across diverse actors and networks. While some networks are hierarchical, like the traditional bureaucracies we talked about in part ten, others, especially in our increasingly digital world, resemble rhizomes, where information and validation can flow freely and unpredictably.
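
If the metaphor feels abstract, the structural difference is easy to show. The sketch below is my own toy comparison, not anything from Deleuze and Guattari, again using networkx: a tree has exactly one route between any two nodes, while a rhizome-like graph has cycles, so ideas can arrive from several directions at once.

```python
# Toy comparison of a hierarchical tree vs. a rhizome-like graph.
import networkx as nx

# A tree: one root, no cycles, a single path between any two nodes.
tree = nx.balanced_tree(r=2, h=3)  # branching factor 2, height 3

# A "rhizome": random connections with no privileged root or direction.
rhizome = nx.erdos_renyi_graph(n=15, p=0.3, seed=42)

for name, g in [("tree", tree), ("rhizome", rhizome)]:
    components = nx.number_connected_components(g)
    # Cycle-space dimension: how many independent loops the network contains.
    cycles = g.number_of_edges() - g.number_of_nodes() + components
    print(f"{name:>8}: {g.number_of_nodes()} nodes, "
          f"{g.number_of_edges()} edges, {cycles} independent cycles")
```

The tree reports zero independent cycles, so validation can only travel along the one route the hierarchy provides; the random graph almost always reports several, which is what lets information and validation re-enter the network from unexpected places.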


The concept of flows is equally central to their work. Building on our earlier discussion of cultural transmission in part five, flows represent the dynamic, non-linear movement of knowledge, behaviors, and social validation through networks, mirroring the mechanisms of cultural transmission. Just as vertical, horizontal, and oblique transmission shape how cultural traits are passed down or shared among individuals, flows capture the continuous, adaptive exchange of validation that influences whether certain cultural norms are reinforced, challenged, or redefined.


Technologies like vaccines or new medications aren’t accredited by a single authority alone; their acceptance flows rhizomatically across different actors in the network. Regulatory bodies, public health policies, media outlets, community health choices, and personal beliefs all play their roles, with each interaction in the network strengthening or weakening the collective accreditation of the technology.
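
To show what this could look like mechanically, here's a hedged toy simulation, entirely my own sketch rather than a model from any of the sources cited here: each actor holds a level of trust in a new technology and, at every step, moves partway toward the average trust of its neighbours, so validation and doubt both flow through the network.

```python
# Toy simulation of accreditation "flowing" through a network of actors.
# Actor names, ties, and starting trust values are hypothetical assumptions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("regulator", "media"), ("regulator", "clinic"),
    ("media", "community"), ("clinic", "community"),
    ("community", "individual"), ("media", "individual"),
])

# Initial trust in the technology, from 0 (rejected) to 1 (fully accredited).
trust = {"regulator": 0.9, "media": 0.5, "clinic": 0.8,
         "community": 0.3, "individual": 0.2}

ALPHA = 0.5  # how strongly each actor is pulled toward its neighbours
for _ in range(10):
    updated = {}
    for actor in G.nodes:
        neighbour_avg = sum(trust[n] for n in G[actor]) / G.degree(actor)
        updated[actor] = (1 - ALPHA) * trust[actor] + ALPHA * neighbour_avg
    trust = updated

for actor, value in sorted(trust.items(), key=lambda kv: -kv[1]):
    print(f"{actor:>10}: {value:.2f}")
```

Run it and the scores converge toward a shared level somewhere between the regulator's confidence and the individual's skepticism; cut the regulator out, or lower its starting value, and the whole network settles lower. That's the strengthening-and-weakening dynamic in miniature: no single actor decides the outcome, but every tie changes it.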


In this way, social accreditation itself is a kind of flow, moving through a network of actors who constantly shape, reshape, and renegotiate its legitimacy. Traditional accreditation, by definition, has to be controlled by centralized, authoritative institutions such as academic journals, regulatory bodies, and industry gatekeepers.


But rhizomatic accreditation processes offer a way to bypass these gatekeepers by enabling decentralized, network-driven validation. This is how technologies and ideas gain momentum at the grassroots level: they're driven by diverse actors like users, communities, independent experts, and online platforms, who all collectively contribute to a technology's legitimacy.


By framing social accreditation within a rhizomatic structure, we can see how these processes have the potential to disrupt entrenched power dynamics and create space for new forms of social organization. The ability for technologies, ideas, and social movements to gain traction without the need for top-down validation challenges the authority of traditional institutions, enabling more fluid and inclusive forms of accreditation.


To illustrate how these concepts play out in real life, I want to go back in time to the story of Ignaz Semmelweis, a Hungarian physician who, in 1847, discovered that handwashing could drastically reduce mortality rates in hospitals. Semmelweis observed that women in maternity wards attended by doctors and medical students had much higher mortality rates than those attended by midwives. He deduced that the doctors were unknowingly transferring infections from autopsy rooms to maternity wards because they weren’t washing their hands between procedures.


Semmelweis’ discovery was groundbreaking, but it was initially rejected by the rest of the established medical community. This was partly because he couldn’t provide any scientific explanation for his findings, and also because his abrasive personality alienated him from many of his colleagues. But most importantly, his idea clashed with the prevailing social and medical norms. The thought that doctors themselves could be the source of disease was a direct challenge to the established hierarchy. Semmelweis’s idea threatened the network of actors who controlled medical knowledge, and without their accreditation, his breakthrough was ignored for decades.


It wasn't until Louis Pasteur's work on germ theory gained traction in the 1860s that the network shifted. Once germs became known to the public, they also became central actors in the medical landscape. Hand hygiene became a common practice and was enforced through health codes and medical protocols. The flow of scientific knowledge reshaped how doctors, patients, and germs interacted, turning handwashing into a medical and social obligation.


Our old friend Roland Barthes offers another helpful parallel here to see how this dynamic was pushed even further. Remember from part five that Barthes famously explored how everyday objects, like soap, can take on symbolic meaning as myth.


For Barthes, soap wasn't just a practical tool; it was a symbol of purity, civility, and order. In this way, our hygiene practices, like handwashing, have become more than just medical necessities; they're now moral and social standards that are deeply embedded in our cultural mythology. What started as a marginalized idea from Semmelweis has evolved, through the validation of scientific networks, into a social norm and moral obligation.


We can deconstruct this further. As Barthes showed, soap has become a tool in our cultural war on germs. The act of washing our hands or using antibacterial products is about more than cleanliness; it’s framed as a battle for purity and safety. This warlike metaphor permeates public health campaigns, advertising, and everyday routines, reinforcing the norm of hygiene in a constant feedback loop.


In this context, we can see a process of schismogenesis between us and the microorganisms we call “germs.” Over time, we’ve developed a tribal form of aggression against germs, framing them as the enemy in our quest for cleanliness and health. This framing has shaped everything from our medical discourse to the technologies we develop, influencing how hygiene has become ritualized in everyday life.


By anthropomorphizing germs as dangerous invaders and positioning ourselves as defenders of health, we reinforce the separation between “us” and the natural world. This schismogenesis doesn’t just impact our relationship with microorganisms; it shapes the technologies we develop, the policies we enact, and the products we use, all of which contribute to the way hygiene has become moralized and ritualized in everyday life.


Think about how the parallels between the world of biological pathogens and the digital realm are strikingly apparent in the way we conceptualize and interact with technology. We refer to bad computer programs as bugs and viruses. We clean and wipe our drives, empty unused files into the trash can, etc. In this sense, our conceptual mythology behind cleanliness and microorganisms has even shaped the way we use and create hardware and software.


Technologies, germs, doctors, and even our cultural affiliations are all actors within a rhizomatic network, and each functions as a machine that produces connections, reshapes flows of knowledge, and influences societal norms. The success or failure of a technology isn't just a matter of its technical merit but of how well it navigates this complex, interconnected system of actors and flows. These flows adapt with social norms and knowledge across various forms of cultural transmission, but remember that accreditation can come in the form of validations and sanctions. As with the Segway, implicit sanctioning of a technology can damage its reputation and cause it to fail, regardless of its utility or function.


Let's consider the recent increase in the public's resistance to vaccines. While vaccination technology has saved countless lives over the years and is generally backed by scientific evidence, it's the cultural attitudes within social networks, healthcare institutions, and governments that have played a significant role in determining whether vaccines are widely adopted and accepted.


Anti-vaccine movements are rooted in a mistrust of institutional authority. We touched on this in part eight when discussing “conspirituality” and the spread of cult-like conspiracies. While these movements can be driven by misinformation, they also tap into some uncomfortable truths.


A big part of this mistrust comes from real historical events. Incidents like lab leaks and the ongoing debate over gain-of-function research, where scientists intentionally modify viruses to try to understand potential threats, have given rise to what I think is genuine concern about the unforeseen risks of certain scientific advances.


Though these occurrences are rare, they expose the inherent risks of scientific experimentation, especially when transparency is lacking. The lab leak theory surrounding COVID-19, along with past incidents like the Ebola outbreak and the H1N1 swine flu pandemic, has heightened public wariness, showing how gaps in understanding and communication can fuel skepticism toward scientific authority.


On one hand, we rely on the benefits of modern science and technology, like vaccines that protect us from deadly diseases. On the other hand, we're deeply anxious about the risks these same innovations might carry, since they involve complex processes we don't fully understand. This tension creates fertile ground for distrust, especially when people feel disconnected from the institutions responsible for these technologies.



The challenge is that while we benefit from technologies like vaccines, most of us lack a full understanding of how they work or are developed. This gap between what we rely on and what we know can stoke fear of the unknown, making it easier for conspiracy theories, collective illusions, and anti-science sentiments to spread. Here, the actors are researchers, government agencies, healthcare workers, conspiracy theorists, and even the viruses themselves, all part of a complex network. When the flow of trust in one part of the system breaks down, the entire network is cast into doubt.


At the core of this mistrust is discomfort with the bureaucratic authority that governs modern systems like science and technology. People can't trust the vaccines themselves if they always feel uneasy about the institutions responsible for creating and distributing them.


This disconnect underscores why social accreditation is so critical: it’s not just about proving a technology works, but about fostering trust in the actors involved in delivering it. The story of vaccines is a perfect example: despite overwhelming scientific evidence supporting them, public acceptance still hinges on the credibility of the institutions involved and the narratives that shape public perception. When that trust falters, even the most scientifically sound innovations can face public resistance.

 

General-Purpose Technologies and Technological Supercycles



So far, we’ve explored how technologies are embedded in these vast, interconnected networks of validation, adoption, and sometimes, resistance. But when we start talking about general-purpose technologies (GPTs), we’re talking about innovations that are so fundamental that they spark waves of further advancements across pretty much every aspect of life.


Think of electricity, radio, or the internet. These tools have become the foundation of entire industries and economies that proliferate across borders and generations. They spark what futurist Amy Webb calls technology supercycles: long waves of sustained innovation that come directly from the impact of these GPTs.


General-purpose technologies act as machines within the rhizomatic network of actors we've discussed, continuously reshaping flows of knowledge, capital, and social norms. They spread in unpredictable ways, with new innovations emerging from unexpected places, and it's this non-linear, interconnected growth that makes GPTs so transformative: they don't just solve problems; they fundamentally reshape how society operates.


Take electricity. It didn’t just light up homes and cities, it also revolutionized other processes like manufacturing, transportation, and communication. But it wasn’t just the technology that made this happen. The true impact of electricity came from the feedback loops between businesses, governments, and consumers. Each of these actors interacted with and built upon the technology, allowing it to spread and reshape entire industries. It wasn’t just about having light and power; it was about adapting to a new way of living and working that was organized around this new capability.


The same thing happened with the internet. Of course, it’s a groundbreaking technology, but its true power comes from the way that governments, businesses, and people interacted with it and expanded it. The internet wasn’t just a technical achievement of connecting some computers and routers, it was the interactions and flows of trust, capital, and culture that turned it into the backbone of modern life. The virtual embodiment of Latour's network and Deleuzian rhizomes.


When Amy Webb talks about technology supercycles, she's referring to these revolutionary periods of sustained progress where the accredited integration of certain technologies reshapes our industries, economies, and societies in the process. This isn't new, either. We saw it happen during the Industrial Revolution with technologies like steam power and mechanized production. These transformative industrial advancements led to fundamental shifts in how people worked, where they lived, and how economies were structured.



But technological supercycles are complex, dynamic processes that rely on the continuous interaction between actors, machines, and flows. A great example is the development of the railroad industry in the 19th century, which didn’t just facilitate faster travel, but also required societies to adopt entirely new norms and standards.


Before the railroad boom, time was viewed as a local matter, dictated by the position of the sun in each individual town. But with the rise of long-distance rail travel, uniform timekeeping became necessary to ensure the smooth coordination of train schedules.


The creation of standardized time zones was a direct result of the railroad’s need for a synchronized schedule across vast distances. This shift completely changed how people thought about time, space, and distance, leading to new social norms around punctuality, organization, and the pace of life.


The actors involved in this innovation were numerous: government regulators played a crucial role in clearing land, establishing infrastructure and new legal frameworks, while private companies invested in the construction of railways, stations, and engines. Labor unions emerged to advocate for workers’ rights as the rail industry grew, and workers clashed with corporations over wages and unsafe conditions. And, of course, consumers, as in both passengers and businesses, who shaped the trajectory of the railroad by consistently demanding faster, more efficient transport.


Each of these actors interacted in the broader network, influencing how railroads were accredited, expanded, and integrated into society. Entire industries like agriculture and mining were revolutionized by the ability to transport large quantities of goods across countries and other large distances, leading to urbanization and the growth of cities.


Cultural norms around what was considered “far away” or “distant” shifted as trains allowed people to travel to new regions in a fraction of the time previously required. This, in turn, reshaped societal ideas about travel and migration as people were no longer bound to the place of their birth.


But by the early 20th century, the dominance of railroads began to decline. It was disrupted by the rise of a new technology that redefined these same flows of time and space.


The introduction of personal motorized vehicles fundamentally shifted the relationship between people and machines. Before Henry Ford and the Model T, cars were just a luxury for the wealthy. Ford’s assembly line revolutionized production because it drastically lowered manufacturing costs and made cars more accessible to the average person. This shift didn’t just transform mobility; it redefined freedom, space, and how our societies functioned.


This is how it works with most breakthrough technologies: as they become cheaper to produce and easier to distribute, they spread. What starts as a luxury becomes a part of everyday life. This is also why GPTs like electricity or the internet go from niche to mainstream so quickly. At first, they’re expensive and limited, but as production costs drop and access grows, they become embedded in everything we do. The cheaper and more available a technology is, the faster it scales and transforms industries, societies, and the way we live.


Cars offered a new level of personal freedom and flexibility that railroads couldn't match, allowing people to travel whenever and wherever they wanted instead of being dependent on fixed rail lines and schedules. This marked another shift in social values, from collective travel and synchronized systems to individual mobility and personal autonomy.


But the car didn’t just change the way people traveled, it disrupted the entire network of actors that had previously sustained the railroad industry. Governments shifted their attention away from train tracks toward building highways and roads. Consumers stopped relying on trains for long-distance travel and began buying more personal vehicles.


The transition from railroads to automobiles also changed the flows of capital, labor, and resources. Capital investment moved away from railway systems and into automobile production and road construction. The design of cities shifted, with urban planning becoming more car-centric, leading to the construction of suburbs, parking lots, highways, and gas stations. Labor flows were also affected as jobs moved from rail construction and maintenance to manufacturing cars, staffing gas stations, and building highways and roads.



Adopting the car didn’t just reshape the way we travel or replace railroads, it required the development of new regulatory frameworks that govern everything from manufacturing to maintenance, safety, ownership, and environmental impact. These explicit regulations didn't come from any single authority but instead emerged through the interactions between various actors within the broader network.


For example, regulating auto manufacturing typically involves government bodies setting standards for emissions, safety features, and environmental impact, while car companies must comply with these regulations to bring their products to market. Simultaneously, repair shops and maintenance industries developed in response to the demand for car upkeep, and these, too, are subject to quality standards and licensing rules. Driving safety regulations rely on law enforcement agencies to ensure strict adherence to traffic laws, while insurance companies mitigate the risks associated with car ownership and driving.


Even the process of buying and selling cars is governed by an explicit set of rules, including vehicle registration, taxation, warranties, and consumer protection laws. All of these regulations form part of a broader flow of information, trust, and validation, where governments, corporations, consumers, and labor unions negotiate how cars should be produced, sold, and used.


The regulation of GPTs like the car depends on how effectively these flows move through the network, aligning the interests of different actors while managing the risks and challenges posed by the technology. Consider how traffic laws evolved in response to the increasing number of automobiles on the roads, with city planners, police forces, and municipal governments all working together to create a system of signs, signals, and rules to manage traffic flow and ensure public safety.


In this way, the regulation of GPTs, like cars, relies on the rhizomatic network of actors. These actors can include consumers, repair services, and environmental advocates as well as government bodies and corporations. Each actor functions as a machine within this broader network to help shape the way cars are produced, maintained, sold, and used.


This regulatory process underscores an important tension in the development of general-purpose technologies: the growing divide between technologies that emerge from open-source ecosystems versus those developed in competitive, closed environments.


 

Good Actors, Bad Actors, and the Consequences of Open Innovation


The importance of open innovation in today's tech cycles can't be overstated. Bringing back the smartphone revolution as an example, we know that while the iPhone was a breakthrough technology, its success didn't rest on Apple alone. It relied on an entire ecosystem of app developers, component manufacturers, and network providers working together.


The twist here is that Apple’s ecosystem itself isn’t actually “open.” It’s built on collaboration, yes, but it’s also heavily controlled and siloed by Apple's bureaucratic corporate policy. App developers must follow strict guidelines, and much of the iPhone’s hardware and software is proprietary. This dynamic highlights a core tension in tech today: the push-and-pull between openness and control.


Historically, many technological advancements came from zero-sum geopolitical competitions. Think back to WWII and Alan Turing's work on cracking the Enigma code, which laid the foundation for our understanding of computing. Turing's team and their innovations had to happen under a cloak of secrecy because open collaboration was too risky during the war. The same was true for the Manhattan Project at Los Alamos, where scientists like J. Robert Oppenheimer spearheaded the creation of the atomic bomb in total secrecy. During the Cold War, the U.S. and Soviet Union competed in a high-stakes Space Race that accelerated innovation, but again under intense secrecy, each side trying to gain the upper hand. In these cases, it was tightly controlled environments that drove the breakthroughs, but at the cost of openness and public consensus in the debate over how far we should be willing to go.



Today, this dynamic is echoed in the U.S.-China tech rivalry over semiconductors and AI. The semiconductor race is driven by protectionism, trade restrictions, and national security concerns. As computer chips become embedded in everything from cars to household appliances, control over chip design and AI capabilities has become a critical issue for both countries. This competition leads to slower global innovation while accelerating advancements within heavily guarded national borders. It’s a zero-sum game that pushes companies and governments to prioritize control over collaboration.


The recent rise of NVIDIA is a prime example of how this race for dominance plays out in the corporate world. As AI technologies are becoming essential across different industries, NVIDIA’s hardware has become the equivalent of gold, propelling its valuation to over $3 trillion, surpassing even Apple. Because of both the hype and real potential of AI, NVIDIA now holds over 15,000 patents, with more than 76% of them actively in use.


They really only compete with a small circle of major players like AMD and Intel, which is why they've made so much money so fast. NVIDIA’s dominance underscores the oligopolistic nature of the tech industry. In these networks, only a few actors control the market landscape, creating significant barriers to entry for newcomers.


This isn't just a semiconductor issue; most U.S. industries are increasingly dominated by a few giants. These oligopolies consolidate power by stifling competition and limiting access for smaller innovators. As a result, technologies that could benefit society at large now remain locked within proprietary walls.


For example, in telecommunications, Verizon, AT&T, and T-Mobile collectively control over 99% of the U.S. wireless market, making it nearly impossible for new competitors to enter. In banking, the “Big Four” (JPMorgan Chase, Bank of America, Wells Fargo, and Citigroup) hold nearly half of all customer deposits in the country, leveraging their scale to influence regulations and pricing. Similarly, Disney’s acquisition of 21st Century Fox in 2019 solidified its dominance in media, allowing it to set content trends while limiting diversity in the market.


This dynamic reinforces cycles of control and centralization, where even potentially democratizing innovations are constrained, and technologies that could expand access to knowledge, healthcare, or sustainable infrastructure are instead optimized for corporate profit, often through planned obsolescence: products deliberately designed with short lifespans or locked into closed ecosystems, ensuring that consumers remain dependent on the dominant players while smaller innovators are left without pathways to compete.

These oligopolies use strategies like vertical integration to control supply chains, predatory pricing to squeeze out competition, and exclusive agreements to dominate access to markets. They also deploy immense lobbying power to influence regulations in their favor, effectively shaping the playing field to maintain their dominance.


The debate between open and closed systems in technology reflects deeper social dynamics that are at the heart of Social Accreditation Theory. On one side, there’s the ideal of solving global problems through open, collaborative networks that encourage shared knowledge. On the other, we see the push for control and profit, where proprietary systems prioritize strategic dominance.


To understand how this debate has shaped society, we need to revisit the cultural shifts of the 1980s, when technology began seeping into everyday life and reshaping social norms. This was the era when the “tech geek” went from being an outsider to a celebrated figure to a danger to society, all driven by a dynamic process of social accreditation.


The boy geniuses who could change the world with a few lines of code also became, in the public's eye, potential threats. The same cultural moment that celebrated hackers for their ingenuity also painted them as figures to fear: people tinkering with systems they couldn't fully control, potentially endangering society in the process. The backlash against hackers grew, and soon the FBI was raiding basements and hunting down teenagers who were creating chaos by merely experimenting with code.


Movies like "WarGames" and "Weird Science" captured this dynamic by portraying their main characters as young hackers and inventors who were both celebrated and feared. David Lightman, the teenage hacker played by Matthew Broderick in "WarGames", is a great example of how implicit accreditation within a subculture influences explicit behaviors and norms outside of that in-group.


David's character is a bored kid who seemingly can't be bothered with the slow-paced bureaucratic environment of school. He embodies the 80's image of the tech enthusiast. In the movie, he sees an ad for an upcoming video game in a magazine and sets up an elaborate hack to try to get early access to the game.


After he changes the grade of Ally Sheedy's character, Jennifer, her admiration toward him validates his hacker behavior and encourages him to show off more by hacking into the game. Of course, he doesn't know that the game he's hacking is actually a back door into a military supercomputer running real war game simulations.


When David's accidental near-triggering of a nuclear war results in immediate and explicit sanctions from the government, it serves as a clear signal to us, the audience, of the structural hierarchy's apprehension about the potential misuse of technology outside traditional institutions. The movie not only reflects the spread of norms surrounding technologies during the 80s but also played a significant role in shaping those norms. David's character, a high school student who can outsmart military officials and even the fictional supercomputer, shows us how hacker/tech culture was perceived during that era.


We can see how this played out in real life as well. Robert Morris Jr., an American computer scientist, created the Morris worm in 1988 while a graduate student at Cornell University. His worm exploited vulnerabilities in early network systems and cloned itself, infecting computers across the Internet. Though the exact number of infected computers is unknown, it’s estimated that around 60,000 computers were connected to the Internet at the time, and the worm might have infected up to ten percent of them. Morris was prosecuted for releasing the worm, and became the first person ever convicted under the then-new Computer Fraud and Abuse Act. A friend of his that helped him conceive of the "brilliant project" as they called it, claimed that Morris created the worm simply to see if it could be done.


Cybersecurity author Scott Shapiro distinguishes between “downcode,” the computer code accessible to hackers, and “upcode,” the set of rules governing technology. The upcode encompasses our personal ethics, habits, organizational norms, legal standards, and industrial service standards that incentivize technology production or usage. This framework explains how and why hackers manipulate both implicit and explicit social processes.


Whether for good or bad, code developers are influenced by social accreditation processes. For example, Bill Gates and Paul Allen, the founders of Microsoft, met in high school and hacked their school's scheduling software to make it so that Gates was the only boy in classes with girls, a hack reminiscent of David Lightman's grade change to impress Jennifer in "WarGames."



The rise of tech founders like Steve Jobs and Bill Gates was a result of both implicit and explicit social approval processes. Initially, these guys gained recognition through organic networks of support from fellow tech enthusiasts, venture capitalists, and media outlets.


Steve Jobs and Steve Wozniak were both part of a Silicon Valley group called the Homebrew Computer Club, where they shared ideas and built computers in the ‘70s that eventually led to the creation of the Apple I. These early pioneers helped spread the myth of the garage startup, which represented a new American Dream. And then mainstream media, institutional investments, and capitalist success made these founders even more celebrated.


But this romanticized view of tech disruptors also raised some concerns. Just as quickly as the “tech savior” story came to light, so did worries about the dangers of giving so much power to a few people. Many articles, books, and movies have been written about the unintended consequences and even the potential harm that's been caused by these entrepreneurs.


The internet was designed to be a decentralized and open system where anyone could join in, even though it started as a project within the military. The openness of the internet's early technology allowed for an implicit social approval process that encouraged innovative ideas to take off, like Ray Tomlinson's email system or Marc Andreessen's Mosaic browser. But it also left security vulnerabilities and created privacy and ethical concerns that simply weren't foreseen or understood by the early users and developers, who were mostly professors and students looking for new ways to communicate and share ideas.


As the internet grew, the rhizomatic network structure began to give way to more hierarchical, closed networks organized and dominated by a handful of powerful actors. Companies like Google, Facebook, and Amazon were created because people like Brin, Page, Zuckerberg, and Bezos wanted to use the internet to get around traditional power structures. But they ended up taking over.


These companies established control over their platforms, data, and algorithms and shifted the processes of online accreditation from a decentralized, community-driven model to one dominated by explicit sanctions and validations imposed by corporate gatekeepers. This centralization constricted the free flow of information, transforming the internet from an open, adaptive network into a series of walled gardens.


As Silicon Valley's power grew, the initial ideas of openness and disruption started to clash with the real-world realities of closed systems meant to protect intellectual property and keep the market in their hands. Although Apple started as the underdog against the giant IBM, they eventually became giants themselves and moved toward a closed, tightly controlled system, using their famous brand and patents to maintain their dominance.


The consolidation of power among these tech giants has led to what some now describe as technonationalism and technofeudalism, meaning that these companies now serve as quasi-sovereign machines, controlling both the flows of information and the processes of social accreditation. They have become the primary arbiters of what gets validated or rejected in our digital society. But these centralized systems aren't immune to backlash. As we've discussed, SAT emphasizes the role of self-correcting feedback loops in social accreditation.


The public’s growing disdain and distrust of tech bros and capitalist leaders like Mark Zuckerberg and Elon Musk is a sign that these feedback loops are still active. As these figures face increasing implicit sanctions from both regulators and the public, we're witnessing a recalibration of social norms. The same social networks that once validated them as disruptors are now questioning their impact on privacy, labor, and democracy.


The ongoing debate between open-source and closed systems is a battleground for social accreditation. Will we allow a handful of corporations to dictate the terms of technological progress, or will we reclaim the rhizomatic, decentralized ethos that once defined the internet?


As we look ahead, the battle between centralized and decentralized innovation will keep shaping our world. Big companies and governments can push forward with plans that meet their own goals or interests. But this approach can also stop the creative ideas that come from groups of people working together. Technology innovates best when groups can share ideas, build on each other’s work, and make things better.


The rise of AI, synthetic biology, and general-purpose technologies brings this tension to the forefront. But as we’ve seen, concentrated power carries its own risks, from stifling smaller innovators to creating ethical and security challenges that demand careful oversight.

Striking a balance will be essential, as the choices we make today will determine the role technology plays in shaping the next chapter of our collective future.


In upcoming posts, I’ll dive deeper into these questions, particularly how the accreditation of money brings these tensions into focus. We’ll also explore the need for new ethical frameworks to guide innovation, which can hopefully ensure that technology serves people—not just profits or political agendas.


Thanks for reading this!


If you like this blog, buy me a coffee! https://ko-fi.com/callmebryy



References/Further Reading:
  • Arendt, H. (1958). The human condition. University of Chicago Press.

  • Barrow, J. D. (1994). The origin of the universe. BasicBooks.

  • Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. W. W. Norton & Company.

  • Chayka, K. (2024). Filterworld: How algorithms flattened culture. Doubleday.

  • Crease, R. P. (2019). The workshop and the world: What ten thinkers can teach us about science and authority. W. W. Norton & Company.

  • Cukier, K., Mayer-Schönberger, V., & de Véricourt, F. (2021). Framers: Human advantage in an age of technology and turmoil. Dutton.

  • Eagleman, D. (2020). Livewired: The inside story of the ever-changing brain. Pantheon.

  • Gawdat, M. (2021). Scary smart: The future of artificial intelligence and how you can save our world. Pan Macmillan.

  • Graeber, D., & Wengrow, D. (2021). The dawn of everything: A new history of humanity. Allen Lane.

  • Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. Harvill Secker.

  • Harari, Y. N. (2024). Nexus: A brief history of information networks from the Stone Age to AI. Fern Press.

  • Hedges, C. (2009). Empire of illusion: The end of literacy and the triumph of spectacle. Nation Books.

  • Hoffman, D. D. (2019). The case against reality: Why evolution hid the truth from our eyes. W. W. Norton & Company.

  • Isaacson, W. (2014). The innovators: How a group of hackers, geniuses, and geeks created the digital revolution. Simon & Schuster.

  • Kellerman, G. R., & Seligman, M. E. P. (2023). Tomorrowmind: Thriving at work with resilience, creativity, and connection—Now and in an uncertain future. Simon & Schuster.

  • Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.

  • Li, F.-F. (2023). The worlds I see: Curiosity, exploration, and discovery at the dawn of AI. Flatiron Books.

  • Mack, K. (2020). The end of everything (astrophysically speaking). Scribner.

  • Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio.

  • Musser, G. (2023). Putting ourselves back in the equation: Why physicists are studying human consciousness and AI to unravel the mysteries of the universe. Farrar, Straus and Giroux.

  • Schneier, B. (2015). Data and Goliath: The hidden battles to collect your data and control your world. W.W. Norton & Company.

  • Shapiro, S. J. (2023). Fancy Bear goes phishing: The dark history of the information age, in five extraordinary hacks. Farrar, Straus and Giroux.

  • Suleyman, M. (2023). The coming wave: Technology, power, and the twenty-first century’s greatest dilemma. Crown.

  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.

  • Tyson, N. d. (2022). Starry messenger: Cosmic perspectives on civilization. Henry Holt and Co.

  • van der Kolk, B. (2014). The body keeps the score: Brain, mind, and body in the healing of trauma. Viking.

  • Varoufakis, Y. (2024). Technofeudalism: What killed capitalism. Melville House Publishing.

  • Webb, A. (2019). The big nine: How the tech titans and their thinking machines could warp humanity. PublicAffairs.

  • Webb, A., & Hessel, A. (2022). The genesis machine: Our quest to rewrite life in the age of synthetic biology. PublicAffairs.




