Information Lifecycle: Creation

Introduction

In a previous post, What is Information, I wrote about how the concept of information was first introduced and how information is defined. At the end of the post, I briefly touched on the information lifecycle, in a graphic that looked like this:

Information Lifecycle
Source: Information, A Very Short Introduction, by Luciano Floridi

I will be dedicating one post to each part of the information lifecycle. The purpose of the Information Lifecycle series is to indulge in a topic of deep personal interest and learn more about it – all with the greater goal of increasing my information literacy and sharing that learning process.

As the American Library Association defines it, information literacy is a set of abilities requiring individuals to “recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information.” Being information literate includes the ability to distinguish fact from fiction. Source.

This post is dedicated to the very first, the top, part of the information lifecycle: Create/Generate.

What to expect

In this post, I cover:

  • If information is created or discovered
  • When information is created
  • How information is created
  • Why information is created
  • Who creates information
  • What happens when false information is created (information disorder)

Throughout all posts about the information lifecycle, I will a) uphold the General Definition of Information (GDI), and b) include a section dedicated to information disorder.

In this post, I will not cover:

  • Information formats (I’ll cover that in the next post)
  • Authorship (mentioned briefly in this post, but more to come)
  • Credibility (mentioned briefly in this post, but more to come)
  • Authority (mentioned briefly in this post, but more to come)
  • Value of information (more to come)
  • Trust (more to come)

Is information created or discovered?

This is a tricky question.

Consider research – if a biologist is studying, say, the larvae of a fruit fly for cross-generational genetic abnormalities and they finally find some, can they be said to have created information, or did they merely discover it?

This is where the distinction between data and information comes in: a datum is a fact regarding some difference or lack of uniformity within some context. The biologist has discovered data. The context for that data (e.g. the abnormality only appears in 8% of the fruit fly population) and the subsequent organization of that data into a comprehensible structure (e.g. a graph or academic paper) turn the data into information: well-formed and meaningful data. The biologist has created information.

Data can be discovered, generated, or observed. But information is created.
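To make that distinction a little more tangible, here's a toy sketch in Python (my own illustration – the field names echo the example above and the 8% figure comes from it; none of this is a formal model):

    # A datum: a fact about some difference within a context.
    datum = {
        "context": "fruit fly larvae, generation N",
        "observation": "cross-generational genetic abnormality present",
    }

    # Context plus organization turns discovered data into created information.
    information = {
        "data": [datum],
        "context": "abnormality appears in 8% of the observed population",
        "structure": "figure and summary table in an academic paper",
    }

The data were found in the world; the surrounding context and structure are what the biologist adds.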

When is information created?

It could be argued that information is only created once. There are two problems with that view. It is very difficult, if not impossible, to know exactly:

  1. When that occurs – when was the original instance of that information created?
  2. If the meaning ascribed to the data is the same meaning ascribed to the same (or similar) data discovered by someone else.

I land somewhere in the middle. I think the creation of a given piece of information is generally a one-time occurrence in the scope of all human knowledge, and that it also occurs whenever a single human brain creates that information for itself. (While information is created by animal brains all the time as they learn from their experiences, I focus on the human experience in the information lifecycle.)

“Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience.”

Source: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

So we may perfectly well come away from experiencing the same data (or at least very similar data) and ascribe entirely different meanings to it.

How is information created?

The GDI says that something counts as information if it consists of one or more instances of data and that data is well-formed and meaningful.

How has information been created in the past and how is it created now?

(Psst, they’re the same. History does indeed repeat itself.)

Poker dog gif

Here are the three ways information is typically created.

Direct experience

Direct experience is the first fundamental way humans create information.

A human experiences an event. The brain synthesizes the sensory input (touch, smell, sight, sound, taste) from the event and organizes it into some sort of perception of the event. During this process, the brain turns sensory data into electrochemical signals; synapses fire, and this perception of the event is somehow processed and retained by the brain. I choose these words carefully, because we all too often think of the brain as a computer that has inputs, processing, storage, and retrieval. But the brain is not a computer.

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences…But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand… It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

Source: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
Freaked out llama gif

If our brains were computers, then we’d be able to remember everything we’ve ever seen in perfect detail. As it is, most of us can’t, and our neurons don’t simply store memories.

According to cognitive development theory and schema theory, new information creates or contributes to “schemas,” dynamic structures in the brain that help us process new information by referencing already-acquired information. Learn more about schema theory here.

For example, a human eats a certain mushroom for the very first time – the first time that mushroom has ever been eaten – and they become ill shortly after. Because of the survival directive to seek patterns, the human’s brain will perhaps categorize that mushroom under the cognitive framework (the “schema”) that contains all “do not eat” food items. The visual stimulus of the mushroom and the unpleasant sensation of being sick are now paired; the human was punished for the behavior of eating the mushroom.

The next time they encounter that mushroom, their brain will “flag” it as “do not eat.”

Scared cat gif

Interaction

Interaction is the second fundamental way humans create information.

Human A tells Human B not to eat that specific mushroom. For Human B, this is new information; maybe Human B then tells Human A not to eat a few other kinds of mushrooms. Human A and Human B discuss how their trust in mushrooms is truly broken and how they’ll be more careful when eating mushrooms in the future. Human A and Human B have just created an information node in their brains: based on past experience, newly discovered mushrooms might be dangerous. “Mushrooms might be dangerous” is new information.

Observation

Observation is the third fundamental way humans create information.

A human observes Fox A and Fox B in the forest. Fox A eats the aforementioned mushroom. Fox A starts talking to Fox B. Fox B does not understand, so Fox A urges it to eat a mushroom. Fox B eats the mushroom, gains the ability to speak, and then they have a conversation.

This absurd scenario allows the human to learn that the mushroom also seems to give foxes the ability to speak.

"What did the fox say" gif

TL;DR: how is information created?

The things we experience through direct experience, interaction, or observation become information with which we navigate future experiences.

One good example of how new information is created is a scientific theory. Here’s a recap on scientific theory:

Scientists make progress by using the scientific method, a process of checking conclusions against nature. After observing something, a scientist tries to explain what has been seen.

The explanation is called a hypothesis. There is always at least one alternative hypothesis.

A part of nature is tested in a “controlled experiment” to see if the explanation matches reality. A controlled experiment is one in which all treatments are identical except that some are exposed to the hypothetical cause and some are not. Any differences in the way the treatments behave is attributed to the presence and lack of the cause.

If the results of the experiment are consistent with the hypothesis, there is evidence to support the hypothesis. If the two do not match, the scientist seeks an alternative explanation and redesigns the experiment.

When enough evidence accumulates, the understanding of this natural phenomenon is considered a scientific theory. A scientific theory persists until additional evidence causes it to be revised.

Nature’s reality is always the final judge of a scientific theory.

Source: http://jan.ucc.nau.edu/gaud/bio372/class/behavior/sciproc.htm

Why is information created?

While this is a valid question, I was hesitant to discuss it due to the complexities involved. 

For a person, just like any other human behavior, the act of information creation can be intentional or unintentional – in other words, conscious vs. subconscious. According to self-determination theory, humans act on intrinsic or extrinsic motivations that do or do not fulfill three universal psychological needs: autonomy, relatedness, and competence. Learn more about self-determination theory here, and about how it differs from Maslow’s hierarchy.

While the actions of one information creator may be explained in these relatively simple terms, once more than one person is involved, these terms may be overly simplistic. They certainly don’t fully explain why larger entities – organizations, institutions, entire governments – create information. In those cases, there may be a much more complex interplay of intrinsic and extrinsic motivations, such as political or financial factors.

Who creates information?

We live in the “information age.” To get all metaphysical about it, some even call our digital environment an infosphere, which is akin to our physical reality, the biosphere.

In many respects, we are not standalone entities, but rather interconnected informational organisms or inforgs, sharing with biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. This is the informational environment constituted by all informational processes, services, and entities, thus including informational agents as well as their properties, interactions, and mutual relations.

Source: Floridi, Luciano. Information: A Very Short Introduction (Very Short Introductions) (p. 9). OUP Oxford. Kindle Edition.

Simply put, there is a lot of information floating around these days and we’re all steeped in it. So who’s creating all this stuff?

Any person, group, organization, or country can create information.

Sometimes they claim authorship, but sometimes they don’t. Sometimes the information is presented with authority (say, in an academic journal or media outlet), sometimes without authority, or sometimes it isn’t presented at all. Sometimes we don’t know when information is created, and we might never get access to it.

Anyone can create information?

Shocked kitten gif

Yeah… so about that.

Revisiting the definition of information with a caveat: data + meaning = information. If the underlying data is true/accurate, well-formed, and meaningful, then it’s information. As you can imagine, creating new information is actually really hard because it has to fulfill these conditions. But…

  • What if something is wrong with data or the meaning ascribed to the data?
  • What if there’s some false data?
  • Or if the data isn’t well-formed?
  • Or it’s not meaningful – it has the wrong meaning, or no meaning at all?

Then we have information disorder on our hands. Sounds scary, right?

World War Z gif

Even though we don’t have any zombies climbing walls, the effects of information disorder are scary, and there’s never been a more pertinent time to talk about it.

What is information disorder?

It’s nothing new, let’s be clear. From smear campaigns between politicians since the dawn of, uh, politics, to any time anyone has spread an untrue rumor, we’ve been dealing with information disorder for a long while.

According to the Council of Europe’s Information Disorder Report of November 2017, which attempts to “examine information disorder and its related challenges,” there are three types of information disorder:

  • Disinformation
  • Misinformation
  • Malinformation

The introduction to the 2017 report argues that “while the historical impact of rumours and fabricated content have been well documented… contemporary social technology means that we are witnessing something new: information pollution at a global scale; a complex web of motivations for creating, disseminating and consuming these ‘polluted’ messages; a myriad of content types and techniques for amplifying content; innumerable platforms hosting and reproducing this content; and breakneck speeds of communication between trusted peers.”

A summary of this quote and a few additional notes:

  • Information disorder is created for a number of different reasons (outside the scope of this piece, read the Council of Europe’s report if you wish to learn more)
  • Information disorder can be created by official or unofficial actors
  • Information is typically packaged into the form of “messages,” which are then transmitted across complex networks at great speeds (more to come in future posts)
  • This is happening at a greater scale and rate than we’ve ever seen before

Misinformation

Misinformation is false or inaccurate information that is created or shared without the intent to cause harm.

I argue that misinformation is information that contains a) incorrect or inaccurate data, b) data that is not well-formed, or c) data that has no meaning, an over-assumed meaning, or the wrong meaning ascribed to it.

While I can’t quantify this, my gut is that a substantial amount of what starts out as information becomes misinformation due to new data or a new way of observing, collecting, or deriving meaning from the data. We think we know something because we interpret some data we found and use it to create a theory about how something works. We think it’s right. In light of new data, 10 years later, it turns out we were wrong and our information becomes false or inaccurate – now it is misinformation.

The scientific process has built-in structures that help account for this. Additional research is built on what came before, and things have to be confirmed and reconfirmed over the course of future research. The self-referential and repetitious nature of this process ideally helps reinforce the information that is more likely to be true, and debunk information that is more likely to be false.

Old wives’ tales are a good example of misinformation, if we assume that at some point somebody thought they were true.

(I was told all of these at some point or another as a child.)

When this information was created, the creator thought it was true. Like I said, it happens, and frequently.

Disinformation

Disinformation is false or inaccurate information that is created with the intent to cause harm.

A simple example of this would be a false rumor.

Mean Girls gif

More serious examples include:

The harm inflicted by disinformation can be anywhere from mild to catastrophic, short-term to long-term.

Malinformation

Malinformation sounds like what it is: information that may be true or accurate, used strategically to inflict harm on a person, group, business, or country. While the 2017 report classifies this as a type of information disorder, it doesn’t dive very deep into analyzing the category; I interpret malinformation as, more simply, weaponized information.

Examples:

  • Leaking evidence about a political opponent’s extra-marital affairs to a media outlet
  • Disclosing the movement of military units to enemy intelligence
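To keep the three categories straight, here's a toy Python sketch of the framework (my own gloss on the report's definitions, keyed on two questions: is the content false, and is it meant to harm?):

    # Toy classifier for the three types of information disorder.
    def classify(is_false: bool, intends_harm: bool) -> str:
        if is_false and intends_harm:
            return "disinformation"   # false, and meant to cause harm
        if is_false:
            return "misinformation"   # false, but shared without intent to harm
        if intends_harm:
            return "malinformation"   # true, but weaponized to cause harm
        return "information"          # true and benign

    print(classify(is_false=False, intends_harm=True))  # -> malinformation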

TL;DR: what is information disorder?

Types of Information Disorder
Credit: Claire Wardle & Hossein Derakhshan, 2017, Link to Creative Commons License
7 Common Forms of Information Disorder
Credit: Claire Wardle & Hossein Derakhshan, 2017, Link to Creative Commons License

Resources

What is Information?

“What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions… If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.”

Richard Dawkins
The Information Galaxy Illustration
The Information Galaxy (or a squiggly doodle, depending on your perspective)

Introduction

What is information, and how does it get around?

This post started as an excuse to explore this high-level question and gain a more comprehensive understanding of the expansive concept we call information.

Over the course of writing this post, I went down many rabbit holes, emerging some hours later, dazed and confused but full of wonder. Perspectives on this question ricochet from the most abstract analyses in the philosophy of information to the most mathematical and scientific studies in the bowels of information theory and information science. The first part of this question evokes strong responses from the camps of philosophy, metaphysics, mathematics, physics, biology, computer science, and art. “What is information?” simultaneously unites and divides all those who seek answers, and all those who try to provide them.

It was a delightful topic to write about, what with all those opinions bouncing around at all these intersections. The process was full of opportunities to challenge my assumptions and expand my understanding of the world.

The intent of this post is to communicate the breadth and depth implied by this question. And hopefully, to instill a bit of wonder about the world.

What is Information?

Simple question, right?

Laughing Minion Gif

At least that’s what I first thought.

To start exploring this question, I’ll cover how the concept of information has been acknowledged by science and information theory. Then I’ll provide a brief overview of information theory, and finally present some parting thoughts on the question.

Why cover science and then information theory?

Increasingly, the physicists and the information theorists are one and the same. The bit is a fundamental particle of a different sort: not just tiny but abstract—a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence.

Gleick, James. The Information: A History, a Theory, a Flood (pp. 9-10). Knopf Doubleday Publishing Group. Kindle Edition.

Science!

Doctor Who Physics Gif

In 1929, Leo Szilard addressed a paradox that had plagued physicists for half a century. He posited that information could be dispensed in small units (bits) that could counterbalance entropy (simply put, entropy is disorder). With his thought experiment, a proverbial new solar system of thought was created. (Learn more about the paradox and his thought experiment here.)

In sum – information brings organization and order; it is the counterpart to disorder.

Information, in its connotation in physics, is a measure of order—a universal measure applicable to any structure, any system. It quantifies the instructions that are needed to produce a certain organization. This sense of the word is not too far from the one it once had in old Latin. Informare meant to “form,” to “shape,” to “organize.”

This information flow, not energy per se, is the prime mover of life—that molecular information flowing in circles brings forth the organization we call “organism” and maintains it against the ever-present disorganizing pressures in the physics universe. So viewed, the information circle becomes the unit of life.

Loewenstein, Werner R.. The Touchstone of Life (pp. xv-xvi). Oxford University Press. Kindle Edition.
Great Circle of Life, Lion King Gif

How much information is needed to counterbalance disorder?

The more disorder, the more information is needed to bring order. Any element/system that can be reproduced in many (equivalent) ways is perceived as disorderly and requires more information to bring about order.
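Here's my own back-of-the-envelope gloss on that, borrowing Shannon's measure rather than anything from Loewenstein directly: if a system can be arranged in W equally likely, equivalent ways, then specifying one particular arrangement – imposing order – takes

I = \log_2 W \text{ bits}

so the more equivalent arrangements there are (the more disorder), the more information it takes to pin the system down.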

Over time, systems shed their information (order) and maximize their entropy (disorder). This phenomenon, while rooted in physics, is known in software engineering; it’s called software entropy. Unless maintainers and builders actively work against the buildup of disorder, as the system is modified and added to over time, the disorder will increase. How can we prevent this? Some folks suggest fixing metaphorical broken windows as an effective preventative measure.

So that’s how information, as a concept, came into being, at least in physics.

Science, out.

Bill Nye Science Gif

Information Theory

Raccoon with an Abacus Gif
Maths

This section will, at an extremely high level, provide a brief history of the concept of information in the realm of information theory, and then outline the basics of information theory.

The Concept of “Information”

This field had a promising start in the 1920s, when telegraphs were booming (not literally, of course) and communications infrastructure was rapidly expanding. Due to the ever-increasing importance of reliable transcontinental telegraphy, many aspects of that communication system came under theoretical and applied scrutiny during the telegraph boom. Since there was not yet an established field of information theory, these topics were studied at a messy/amazing/unsustainable intersection of engineering, mathematics, and “communication systems.”

Here’s an abridged history:

  • 1924: Harry Nyquist draws a distinction between the actual content of the signal and the information carried within the message. He begins to discuss how a system could be optimized for the transmission of intelligence (his word, not mine). As part of that, he realizes that communication channels have a certain transmission maximum. He doesn’t talk about information – just intelligence.
  • 1928: R.V.L. Hartley builds upon Nyquist’s ideas by removing some of the more interpretive/subjective elements (namely, the concern over meaning). He develops mathematical proofs for measuring the flow of intelligence. In his own words, he hoped “to accomplish… a quantitative measure whereby the capacities of various systems to transmit information may be compared.” Source.
  • 1948: Claude Shannon, widely regarded as the father of information theory, cites Nyquist’s and Hartley’s papers in his groundbreaking paper, A Mathematical Theory of Communication. More below. Note: Shannon made the jump from intelligence to information.

There are two parts to Shannon’s work:

  • Modeling the conceptualization of information and information sources and working from these models:
Diagram of general communication system, per Claude Shannon's 1948 paper
  • Developing theories on the sending of information across the channel, the limits of the amount of information, and noise.

I touch on Shannon’s work a bit more in a future section of this post, but if you want to know all about it, I recommend reading Shannon’s actual paper, this useful summary of his life from the Scientific American, or just giving him a good ole’ Google.

Shannon’s work allowed all communication systems – radio, television, telegraph, etc. – to be unified under one model with common characteristics and problems, although “Shannon’s model does not cover all aspects of a communication system… in order to develop a precise and useful theory of information, the scope of the theory has [sic] to be restricted” (Source).

Information theory was born.

Lion King Simba birth gif
Simformation Theory is born!

The (Very) Basics of Information Theory

Information theory is a mathematical representation of the conditions and parameters affecting the transmission and processing of information (Encyclopaedia Britannica).

Information theory deals with three basic concepts:

(a) the measure of source information (the rate at which the source generates the information),

(b) the information capacity of a channel (the maximum rate at which reliable transmission of information is possible over a given channel with an arbitrarily small error), and

(c) coding (a scheme for efficient utilization of the channel capacity for information transfer). These three concepts are tied together through a series of theorems that form the basis of information theory summarized as follows:

If the rate of information from a message-producing source does not exceed the capacity of the communication channel under consideration, then there exists a coding technique such that the information can be sent over the channel with an arbitrarily small frequency of errors, despite the presence of undesirable noise.

Information Theory, Coding and Cryptography by Arijit Saha, Nilotpal Manna, Mandal
Racoon playing a water sprinkler gif
Trying to hold onto a sense of what that meeeeans

That’s dense, so let’s break that down.

Information system illustration

A) Measure of source information

  • “The rate at which the source generates the information.”
  • This is how many envelopes the source produces or how much information is in the message.

B) Information capacity of a channel

  • “The maximum rate at which reliable transmission of information is possible over a given channel with an arbitrarily small error.”
  • How many envelopes can fit into that channel, the speed of the envelope moving through the channel, and the tolerance for small errors in envelope sealing (just an example of an error type)

C) Coding

  • “A scheme for efficiently using the communication channel’s capacity.”
  • The envelope itself, the methodology of fitting the message in the envelope, the way information is encoded as a message, etc.
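To make (a), (b), and (c) a bit more concrete, here's a small Python sketch (my own illustration, not from the book) that computes the entropy of a simple source and the capacity of a binary symmetric channel; Shannon's result says reliable coding schemes exist whenever the source rate stays below the channel capacity:

    import math

    def entropy(probs):
        """Shannon entropy in bits: average information per symbol."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def bsc_capacity(flip_prob):
        """Capacity (bits per channel use) of a binary symmetric channel
        that flips each transmitted bit with probability flip_prob."""
        return 1 - entropy([flip_prob, 1 - flip_prob])

    # (a) Source information: a lopsided 90/10 source carries less
    #     information per symbol than a 50/50 one.
    rate = entropy([0.9, 0.1])        # ~0.47 bits/symbol
    # (b) Channel capacity: a 5% bit-flip rate ("torn envelopes").
    capacity = bsc_capacity(0.05)     # ~0.71 bits/use
    # (c) Coding: reliable schemes exist as long as rate < capacity.
    print(f"rate={rate:.3f}, capacity={capacity:.3f}, ok={rate < capacity}")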

Just a bit more about Shannon’s paper

Shannon also introduced two other concepts about information in the context of a communication system:

  1. Information is uncertainty. “More specifically, if a piece of information we are interested in is deterministic, then it has no value at all because it is already known with no uncertainty. From this point of view… the continuous transmission of a still picture on a television broadcast channel is superfluous. Consequently, an information source is naturally modeled as a random variable or a random process, and probability is employed to develop the theory of information” (Source).
  2. Information to be transmitted is digital. “This means that the information source should first be converted into a stream of 0’s and 1’s called bits, and the remaining task is to deliver these bits to the receiver correctly with no reference to their actual meaning” (Source).

And he proved two theorems, which relate closely to (a), (b), and (c) above.

1. The source coding theorem introduces entropy as the fundamental measure of information which characterizes the minimum rate of a source code representing an information source essentially free of error. The source coding theorem is the theoretical basis for lossless data compression.

2. The second theorem, called the channel coding theorem, concerns communication through a noisy channel. It was shown that associated with every noisy channel is a parameter, called the capacity, which is strictly positive except for very special channels, such that information can be communicated reliably through the channel as long as the information rate is less than the capacity. These two theorems, which give fundamental limits in point-to-point communication, are the two most important results in information theory.

Information Theory and Network Coding by Raymond W. Yeung
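In symbols (standard statements of the two theorems, paraphrased rather than quoted from the book): the entropy of a source X,

H(X) = -\sum_{x} p(x) \log_2 p(x),

is the minimum average number of bits per symbol needed to represent the source essentially free of error (source coding); and every noisy channel has a capacity

C = \max_{p(x)} I(X;Y),

where I(X;Y) is the mutual information between channel input and output, such that reliable communication is possible as long as the transmission rate stays below C (channel coding).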

To learn more, I highly recommend picking up the book cited above.

So…

"Wait, what was the question" gif

What is information?

We’ve covered how “information” was discovered as a concept in both science and information theory, how information theory was established, and what concepts information theory touches. But my original question still stands.

Well. Shannon was certainly cautious about answering this question. Here’s what he had to say about it in the late 1940s.

The word ‘information’ has been given different meanings by various writers in the general field of information theory. It is likely that at least a number of these will prove sufficiently useful in certain applications to deserve further study and permanent recognition.

It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field.

The Lattice Theory of Information by C. Shannon

It’s an eloquent way of saying:

Nope Nope Nope Gif

But hey – that was in 1940-something. Things have probably changed, right?

Work on the concept of information is still at that lamentable stage when disagreement affects even the way in which the problems themselves are provisionally phrased and framed.

Information: A Very Short Introduction, Luciano Floridi, 2010

And:

What is information? The question has received many answers in different fields. Unsurprisingly, several surveys do not even converge on a single, unified definition of information (see for example Braman [1989], Losee [1997], Machlup and Mansfield [1983], Debons and Cameron [1975], Larson and Debons [1983]).

Source: https://plato.stanford.edu/entries/information-semantic/#2
Disgruntled Tangled Frog Gif
Unamused

It’s time for a bit of a leap of faith.

I’m going to make an assumption…

…That whoever is reading this wants a f**king answer to this question, if you’ve made it this far.

We’re going to branch down from “information” to “data”, and explain it from the perspective of the “semantic” philosophical theory. I don’t really know what that means either, but assume that lots of people agree and disagree with the direction I’m taking this, and it’s not a black & white answer.

Wayfinding map through semantic information theory - don't worry, we only go two levels into it
A wayfinding map. Source.

The General Definition of Information (GDI)

This is a controversial and highly subjective answer to a seemingly simple question.

The General Definition of Information (GDI) defines information in terms of data + meaning. Various fields have adopted the GDI – generally, those that consider data and information to be more concrete entities, e.g., information science.

The General Definition of Information (GDI):
x is an instance of information, understood as semantic content, if and only if:

(GDI.1) x consists of one or more data;

(GDI.2) the data in x are well-formed;

(GDI.3) the well-formed data in x are meaningful.

Source: https://plato.stanford.edu/entries/information-semantic/#1

Let’s break down each of the words in italics.

Data

This gets weird and metaphysical, real fast.

Again, sticking to the GDI’s accepted definition of a singular data point (a datum): a datum is a fact regarding some difference or lack of uniformity within some context.

For example, the top-level domain (TLD) of my website is .dev. True story. This is a datum (fact) about my website (context). There are many top-level domains, but even without knowing that fact, the existence of .dev suggests there may be non-.dev possibilities.

There’s loads more to the definition, but I’m not going to go into it here. If you want to learn more, this is the place to go.

Well-Formed

This means that the data are clustered together correctly, according to the rules (syntax) that govern the chosen system, code, or language. Syntax is what determines the form, construction, composition, or structuring of something.

For example, in a tree graph, a parent node will always appear above a child node. A child node will always appear below the parent node. This is syntax.

Parent node, child node illustration

Meaningful

The data adhere to the semantics (meaning) of a system.

In the graph, we understand that the child nodes are sub-elements of the parent node. That relationship is due to one or multiple shared characteristics, functions, or properties. This is the semantic structure of a tree graph.
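Pulling the three conditions together in a toy Python sketch (my own illustration – the edge example and the two checks are hypothetical stand-ins for whatever syntax and semantics govern the system at hand):

    # Toy GDI check: x counts as information iff it consists of data (GDI.1)
    # that are well-formed (GDI.2) and meaningful (GDI.3).
    def is_information(data, well_formed, meaningful):
        return (
            len(data) >= 1                         # GDI.1: one or more data
            and all(well_formed(d) for d in data)  # GDI.2: syntax holds
            and all(meaningful(d) for d in data)   # GDI.3: semantics holds
        )

    # Hypothetical example: data are (parent, child) edges in a tree graph.
    edges = [("root", "branch"), ("branch", "leaf")]
    well_formed = lambda e: len(e) == 2  # syntax: an edge pairs exactly two nodes
    meaningful = lambda e: e[0] != e[1]  # semantics: no node is its own parent
    print(is_information(edges, well_formed, meaningful))  # -> True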

Becoming Information

“We can see now that information is what our world runs on: the blood and the fuel, the vital principle. It pervades the sciences from top to bottom, transforming every branch of knowledge. Information theory began as a bridge from mathematics to electrical engineering and from there to computing. What English speakers call “computer science” Europeans have known as informatique, informatica, and Informatik.”

The Information: A History, a Theory, a Flood by James Gleick

Put one or more well-formed and meaningful data together, and what have you got?

I can communicate something like this – a rudimentary diagram of the Domain Name System namespace! Information!

Namespace illustration
Bippity Boppity Boo Cinderella Gif

Final Thoughts

Hitchhiker's Guide to the Galaxy Gif - the meaning of life is 42
The meaning of life, per the Hitchhiker’s Guide to the Galaxy

The Information Lifecycle (The Circle of Life)

Once information (data + meaning) can be said to exist, people or systems need to gain access to it via communication systems. Communication systems shape and are shaped by the information lifecycle. But the communication system is just one part of the lifecycle of information:

Information lifecycle illustration
The Information Lifecycle, modeled after a diagram in Floridi, Luciano. Information: A Very Short Introduction (Very Short Introductions) (p. 5). OUP Oxford. Kindle Edition.

The information lifecycle and the contents of this diagram deserve a post of their own. In essence, we’ve only really looked at a portion of this cycle.

The Information Galaxy

As I read and wrote more about this topic, the image I kept coming back to was one of a galaxy. While the astronomy metaphor is, so to speak, quite vast, it captures the mindset I adopted while exploring this topic.

We know things about the galaxy. We don’t know things about the galaxy. There are observed phenomena and rules created to explain them. And there are observed or unobserved phenomena that we haven’t discovered or been able to explain yet. This holds true for the concept of information. We know some things about information and how it gets around. And then we don’t know some things about information and how it gets around. There are unsolved puzzles and undiscovered frontiers.

When considering information, we encounter the far reaches of human knowledge and understanding. Information binds us together and creates boundaries that separate us in equal measure.

One thing is for sure: there’s always more to explore.

Hitchhiker's Guide to the Galaxy dolphin gif

Resources

These resources are mostly all linked throughout this post, but some of these are more so additional reading for the curious or some of my favorite materials about the topic.

  • Information Theory and Network Coding, 2nd Ed. by Raymond W. Yeung
  • Stanford Encyclopedia of Philosophy, Information
  • Information Theory, Coding and Cryptography by Arijit Saha, Nilotpal Manna, Mandal
  • The Information: A History, a Theory, a Flood by James Gleick
  • Information: A Very Short Introduction by Luciano Floridi
  • The Touchstone of Life by Werner R. Loewenstein

art and code (3/3)

Art and Code Illustration
original artwork

preface

  1. This is the last of three posts about art and code, specifically about the similarities in chronological flow/process. I recommend reading the first post and second post prior to this one.
  2. These are subjective views/opinions/not facts and are from the perspective of a novice programmer and visual artist.
  3. This topic deserves a much longer extrapolation and could easily become a book. These posts will be fairly concise.
  4. This preface appears at the beginning of each post in the series.
  5. I am passionate about this topic and believe there are far more similarities than differences in artistic and technical pursuits. I am, overall, at a loss as to why the two generally are held up in contrast to each other.

end

It’s over, or almost over. The thing will be done soon.

For me, this part of the art-making and coding process is the most nerve-wracking. This is when I have to wrap things up, make them final, and commit to (sometimes temporary) permanence. And once it’s done, I put it out there for folks to look at, use, wish there was more to, or find faults with.

But at the end of the day (and the project), the bright side of not working on this thing anymore outweighs any of that.

This part of the process often contains the following actions or thoughts.

final touches

The final touches for both art and code usually involve tying up loose ends and cleaning up after the mess of creative flow. Because it can be a messy business. Below are examples of the finishing touches, all the way to actually being finished.

  • Linting | cleaning up the area and taking care of used materials. Tying up loose ends.
  • Unit tests (I mean, these should probably be written already) | preserving the work (spraying, coating), matting and framing. Making sure it can withstand at least some stressors.
  • Documentation | a description/title. Ensuring the work is comprehensible to others.
  • Making a PR or publishing the project | hanging it on a wall. Putting it out there.
  • Peer review | hanging it on a wall in a gallery that the whole world can walk into. Putting it out there.

time

With both art and code, one has to actively consider a number of factors through the lens of time. Most notably, time decay and environmental stressors.

Once the thing is created, unless the creator takes steps to ameliorate this, time decay inevitably sets in. Things, once created, capture techniques and technologies that exist or were popular at that point in time.

  • Languages, syntax of languages, tools, that version of x | colors, brush strokes, shapes, methods/schools

Here’s an architecture example. (Forgive me architects, this is not a perfectly 1:1 analogy and I don’t intend to communicate that these buildings are equivalent in their architectural significance.)

Alcázar of Segovia, Segovia, Spain. Source
Walt Disney Concert Hall, Los Angeles, CA. Source

They both need to be maintained. They both have different maintenance needs. They both have their charms. They are maybe not so charming to some people. Either way, they need to be taken care of. This is as it is with art and code.

Time decay involves environmental stressors. Examples:

  • If the art piece is exposed to the elements, its materials will degrade.
  • If the coding project is not kept up to date, security vulnerabilities crop up and as the whole world of technology keeps moving forward, things can start breaking.
  • The public uses it | the public sees it. Yikes. Wear and tear occurs. Users and viewers experience it in ways it was perhaps never intended to be experienced.

reflection

This is my favorite part of the painful process.

It is an opportunity to reflect and effectively do a postmortem on one’s own process. This is when we notice areas for improvement and have a chance to learn from ourselves and the effort expended on the creation.

This is also a delicate part of the process, because it is far too easy to start comparing: the final product to the original idea, the final product to other people’s work, etc.

Ideally, we come out of the reflection period with a general sense of accomplishment. Here are some techniques I’ve used to reflect productively:

  • Pretend I am mentoring the 8-year-old version of myself and openly self-dialogue about how things went (hey, not for everybody, but it works for me)
  • Timebox your reflection (e.g. 80 minutes)
  • Ask “what really worked for me?”
  • Ask “what really didn’t work for me?”
  • Ask “what will I bring forward into my next art or code effort that will help me enjoy the process more?”

I have baked this reflection phase into each of my coding projects. I list what I have learned in the README of each project I post on GitHub. I blog about my projects. It’s all part of documenting the things I learn, so I don’t have to learn them the extra-hard way (again). This is a powerful opportunity to collect data on oneself – data that can be leveraged to reach greater heights in the future.

feelings

In my experience, there are two big feelings at this part of the process.

Crushing perfectionism and self-doubt typically go hand in hand; both are often manifestations of imposter syndrome. Learn more about imposter syndrome from the APA.

Impatience – I’m done. When will it be over?

Feelings happen. They have as much power as I allow them to have over this part of the process.

the future

“To iterate or not to iterate?” – that is the question.

  • Do you want to do it again, only better?
  • Are there ways you could build off of this?

Too soon.

Maybe later.