Thinking by playing around

Richard Feynman’s Nobel Prize-winning discoveries in quantum electrodynamics were partly inspired by a chance observation of a spinning dinner plate in the cafeteria one day. Paul Feyerabend said of scientific discovery, “The only principle that does not inhibit progress is: anything goes” (within relevant ethical constraints, of course).

Ideas can come from anywhere, including physical play. Various books can improve creative discovery skills, like George Pólya’s How to Solve It, Isaac Watts’ Improvement of the Mind, and W. J. J. Gordon’s Synectics, along with methodologies like mind mapping and C-K theory, to name a few. Many software products present themselves as shiny new tools promising help. However, we are not just disembodied minds interacting with a computer; we are integrated beings whose reasoning is interwoven with memories, life history, emotions, and multisensory interaction with the world. The tactile is certainly a key avenue of learning, discovering, and understanding.

Fidget toys are popular. Different kinds of toys have different semiotics with respect to how they interplay with our imaginations. Take Legos: structured, like Le Corbusier-style architecture, or multidimensional arrays or tensors, or the snapping together of many software components with well-defined interfaces, with regular scaling from one to many. Or, to take a much older example, Tinkertoys: the analogy of the graph, interconnectedness, semi-structured but composable, like DNA or protein chains, or complex interrelated biological processes, or neuronal connections, or the wild variety within order of human language.

As creative workers, we seek ideas from any and every place to help us in what we do. The tactile, the physical, is a vital place to look.

How to Organize Technical Research?


Some 64 million scientific papers have been published since 1996 [1].

Assuming you can actually find the information you want in the first place—how can you organize your findings to be able to recall and use them later?

It’s not a trifling question. Discoveries often come from uniting different obscure pieces of information in a new way, possibly from very disparate sources.

Many software tools are used today for notetaking and organizing information, including simple text files and folders, Evernote, GitHub, wikis, Miro, mymind, Synthical and Notion—to name a diverse few.

AI tools can help, though they don’t always recall information correctly, and their ability to find connections between ideas is still elementary. But they are getting better [2,3].

One perspective was presented by Jared O’Neal of Argonne National Laboratory, from the standpoint of laboratory notebooks used by teams of experimental scientists [4]. His experience was that as problems become more complex and larger, researchers must invent new tools and processes to cope with the complexity—thus “reinventing the lab notebook.”

While acknowledging the value of paper notebooks, he found electronic methods essential because of distributed teammates. In his view many streams of notes are probably necessary, using tools such as GitLab and Jupyter notebooks. Crucial is the actual discipline and methodology of notetaking, for example a hierarchical organization of notes (separating high-level overview and low-level details) that are carefully written to be understandable to others.

A totally different case is the research methodology of the 19th century scientist Michael Faraday. He is not to be taken lightly: some have called him “the best experimentalist in the history of science,” a judgment that arguably still stands today [5].

A fascinating paper [6] documents Faraday’s development of “a highly structured set of retrieval strategies as dynamic aids during his scientific research.” He recorded a staggering 30,000 experiments over his lifetime. He used 12 different kinds of record-keeping media, including lab notebooks proper, idea books, loose slips, retrieval sheets and work sheets. Often he would combine ideas from different slips of paper to organize his discoveries. Notably, his process to some degree varied over his lifetime.

Certain motifs emerge from these examples: the value of well-organized notes as memory aids; the need to thoughtfully innovate one’s notetaking methods to find what works best; the freedom to use multiple media, not restricted to a single notetaking tool or format.

Do you have a favorite method for organizing your research? If so, please share in the comments below.

References

[1] How Many Journal Articles Have Been Published? https://publishingstate.com/how-many-journal-articles-have-been-published/2023/

[2] “Multimodal prompting with a 44-minute movie | Gemini 1.5 Pro Demo,” https://www.youtube.com/watch?v=wa0MT8OwHuk

[3] Geoffrey Hinton, “CBMM10 Panel: Research on Intelligence in the Age of AI,” https://www.youtube.com/watch?v=Gg-w_n9NJIE&t=4706s

[4] Jared O’Neal, “Lab Notebooks For Computational Mathematics, Sciences, Engineering: One Ex-experimentalist’s Perspective,” Dec. 14, 2022, https://www.exascaleproject.org/event/labnotebooks/

[5] “Michael Faraday,” https://dlab.epfl.ch/wikispeedia/wpcd/wp/m/Michael_Faraday.htm

[6] Tweney, R. D. and Ayala, C. D., 2015. “Memory and the construction of scientific meaning: Michael Faraday’s use of notebooks and records,” Memory Studies, 8(4), pp. 422-439. https://www.researchgate.net/profile/Ryan-Tweney/publication/279216243_Memory_and_the_construction_of_scientific_meaning_Michael_Faraday’s_use_of_notebooks_and_records/links/5783aac708ae3f355b4a1ca5/Memory-and-the-construction-of-scientific-meaning-Michael-Faradays-use-of-notebooks-and-records.pdf

Partitioning complexity

This post looks at how to partition complexity between definitions and theorems, and why it’s useful to be able to partition things more than one way.

Quadratic equations

Imagine the following dialog in an algebra class.

“Quadratic equations always have two roots.”

“But what about (x − 5)² = 0? That just has one root, x = 5.”

“Well, the 5 counts twice.”

Bézout’s theorem

Here’s a more advanced variation on the same theme.

“A curve of degree m and a curve of degree n intersect in mn places. That’s Bézout’s theorem.”

“What about the parabola y = (x − 5)² and the line y = 0? They intersect at one point, not two points.”

“The point of intersection has multiplicity two.”

“That sounds familiar. I think we talked about that before.”

“What about the parabola y = x² + 1 and the line y = 0? They don’t intersect at all.”

“You have to look at complex numbers. They intersect at x = i and x = −i.”

“Oh, OK. But what about the line y = 5 and the line y = 6? They don’t intersect, even for complex numbers.”

“They intersect at the point at infinity.”

In order to make the statement of Bézout’s theorem simple, you have to establish a context built on complicated definitions. Technically, you have to work in complex projective space.
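To see where that last answer comes from (a quick check of my own, not part of the original dialogue): in homogeneous coordinates [x : y : z], the line y = 5 becomes y = 5z and the line y = 6 becomes y = 6z. Any common solution forces 5z = 6z, hence z = 0 and then y = 0, leaving the single point [1 : 0 : 0]. That is the point at infinity shared by all horizontal lines, and it is where the two parallel lines meet.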

Definitions and theorems

Michael Spivak says in the preface to his book Calculus on Manifolds:

… the proof of [Stokes’] theorem is … an utter triviality. On the other hand, even the statement of this triviality cannot be understood without a horde of definitions … There are good reasons why the theorems should all be easy and the definitions hard.

There are good reasons, for the mathematician, to make the theorems easy and the definitions hard. But for students, there may be good reasons to do the opposite.

Here math faces a tension that programming languages (and spoken languages) face: how to strike a balance between the needs of novices and the needs of experts.

In my opinion, math should be taught bottom-up, starting with simple definitions and hard theorems, valuing transparency over elegance. Then, motivated by the complication of doing things the naive way, you go back and say “In light of what we now know, let’s go back and define things differently.”

It’s tempting to think you can save a lot of time by starting with the abstract final form of a theory rather than working up to it. While that’s logically true, it’s not pedagogically true. A few people with an unusually high abstraction tolerance can learn this way, accepting definitions without motivation or examples, but not many. And the people who do learn this way may have a hard time applying what they learn.

Applications

Application requires moving up and down levels of abstraction, generalizing and particularizing. And particularizing is harder than it sounds. This lesson was etched into my brain by an incident I relate here. Generalization can be formulaic, but recognizing specific instances of more general patterns often requires a flash of insight.

Spivak said there are good reasons why the theorems should all be easy and the definitions hard. But I’d add there are also good reasons to remember how things were formulated with hard theorems and easy definitions.

It’s good, for example, to understand analysis at a high level as in Spivak’s book, with all the machinery of differential forms and so on, and also to be able to swoop down and grind out a problem like a calculus student.

Going back to Bézout’s theorem, suppose you need to find the real solutions of a system of equations that amounts to finding where a quadratic curve and a cubic curve intersect. You have a concrete problem; then you move up to the abstract setting of Bézout’s theorem and learn that there are at most six solutions. Then you go back down to the real world (literally, as in real numbers) and find two solutions. Are there any more solutions that you’ve overlooked? You zoom back up to the abstract world of Bézout’s theorem and find the other four by considering multiplicities, points at infinity, and complex solutions. Then you go back down to the real world, satisfied that you’ve found all the real solutions.
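Here is a small worked example of my own (not from the original post) to make the round trip concrete. Intersect the circle x² + y² = 2 (degree 2) with the cubic curve y = x³ − x (degree 3); Bézout’s bound is 2 · 3 = 6. Substituting the cubic into the circle gives x² + (x³ − x)² = 2, that is, x⁶ − 2x⁴ + 2x² − 2 = 0, a degree-6 equation as promised. Writing u = x², the cubic u³ − 2u² + 2u − 2 is strictly increasing (its derivative 3u² − 4u + 2 is always positive), so it has exactly one real root, which lies between 1 and 2. That single positive value of u gives two real values of x, hence two real intersection points; the other four intersections have complex coordinates, and in this particular example none are repeated or at infinity. Down in the real world you find two points; back up in the abstract world, Bézout certifies there is nothing else left to find.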

A pure mathematician might climb a mountain of abstraction and spend the rest of his career there, but applied mathematicians have to go up and down the mountain routinely.

Theory, practice, and integration

Theory and practice are both important. As Donald Knuth put it,

If you find that you’re spending almost all your time on theory, start turning some attention to practical things; it will improve your theories. If you find that you’re spending almost all your time on practice, start turning some attention to theoretical things; it will improve your practice.

However, there’s an implicit third factor: integration.

We can have theoretical knowledge and practical experience, but keep them isolated in different parts of our brain. It’s possible to do well in a probability class and to play poker well, yet be unable to think about probability while playing poker, and so be no better at poker for having studied probability.

It may not be immediately obvious that someone excels at integration. It comes out in off-the-cuff remarks, little comments that show someone can quickly apply abstract theory to a concrete problem.

It’s hard to teach integration. A course can have lectures and labs, but that doesn’t mean students will fully connect the two in their minds. Maybe the connection forms later after some experience and some contemplation.

Engineering attitude

Carver Mead on engineering:

Engineering isn’t something you study and learn, and memorize, and know where to look up. Engineering is understanding things all the way to the bottom, no matter what field they are called, and being able to use that to build stuff and make it work.

I edited the quote slightly. Mead was speaking in the past tense about the attitude that Royal Sorensen brought to Cal Tech. I thought the stand-alone quote would be easier to read in the present tense.

Source: “Carver Mead: the thing that’s going to be the most important 100 years from now,” around 22:30.

Where has all the productivity gone?

Balaji Srinivasan asks in a Twitter thread why we’re not far more productive given the technology available. Here I collect the five possible explanations he mentions.

  1. The Great Distraction.
    All the productivity we gained has been frittered away on equal-and-opposite distractions like social media, games, etc.
  2. The Great Dissipation.
    The productivity has been dissipated on things like forms, compliance, process, etc.
  3. The Great Divergence.
    The productivity is here; it’s just harnessed only by the indistractable few.
  4. The Great Dumbness.
    The productivity is here, we’ve just made dumb decisions in the West while others have harnessed it.

If I had to choose one of the five, I’d lean toward The Great Dissipation, inventing new tasks to absorb new capacity. This is what happened with the introduction of household appliances. Instead of spending less time doing laundry, for example, we do laundry more often.

Maybe we’re seeing that technological bottlenecks were not as important as we thought.

For example, it’s easier to write a novel using Microsoft Word than using a manual typewriter, but not that much easier. MS Word makes the physical work easier, but most of the effort is mental. (And while moving from a 1950 Smith Corona to Word 95 is a big improvement, moving from Word 95 to Word 365 isn’t.)

Technology calls our bluff. Improvements in technology show us that technology wasn’t the obstacle that we thought it was.


Just-in-case revisited

Just-in-time learning means learning something just when you need it. The alternative is just-in-case, learning something in case you need it. I discussed this in an earlier post, and today I’d like to add a little to that discussion.

There are some things you need to know (or at least be familiar with) before you have a chance to use them. Here’s a variation on that idea: some things you need to have practiced before you need them in order to overcome an effort barrier.

Suppose you tell yourself that you’ll learn to use Photoshop or GIMP when you need to. Then you need to edit a photo. Faced with the prospect of learning either of these software packages, you might decide that the photo in question looks good enough after all.

There are things that in principle you could learn just-in-time, though in practice this is not psychologically feasible. The mental “activation energy” is too high. Some things you need to practice beforehand, not because you couldn’t look them up when needed, but because they would be too daunting to learn when needed.

Related post: Bicycle skills

Why a little knowledge is a dangerous thing

Alexander Pope famously said

A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

I’ve been thinking lately about why a little knowledge is often a dangerous thing, and here’s what I’ve come to.

Any complex system has many causes acting on it. Some of these are going to be more legible than others. Here I’m using “legible” in a way similar to how James Scott uses the term. As Venkatesh Rao summarizes it,

A system is legible if it is comprehensible to a calculative-rational observer looking to optimize the system from the point of view of narrow utilitarian concerns and eliminate other phenomenology. It is illegible if it serves many functions and purposes in complex ways, such that no single participant can easily comprehend the whole. The terms were coined by James Scott in Seeing Like a State.

People who have a little knowledge of a subject are only aware of some of the major causes that are acting, and probably they are aware of the most legible causes. They have an unbalanced view because they are aware of the forces pushing in one direction but not aware of other forces pushing in other directions.

A naive view may be unaware of a pair of causes in tension, and may thus have a somewhat balanced perspective. And an expert may be aware of both causes. But someone who knows about one cause but not yet about the other is unbalanced.

Examples

When I first started working at MD Anderson Cancer Center, I read a book on cancer called One Renegade Cell. After reading the first few chapters, I wondered why we’re not all dead. It’s easy to see how cancer can develop from one bad cell division and kill you a few weeks later. It’s not as easy to understand why that doesn’t usually happen. The spreading of cancer is more legible than natural defenses against cancer.

I was recently on the phone with a client who had learned enough about data deidentification to become worried. I explained that there were also reasons to not be as worried, but that they’re more complicated, less legible.

What to do

Theories are naturally biased toward causes that are amenable to theory, toward legible causes. Practical experience and empirical data tend to balance out theory by providing some insight into less legible causes.

A little knowledge is dangerous not so much because it is partial but because it is biased; it’s often partial in a particular way, such as theory lacking experience. If you spiral in on knowledge in a more balanced manner, with a combination of theory and experience, you might not be as dangerous along the way.

When theory and reality differ, the fault lies in the theory. More on that in my next post. Theory necessarily leaves out complications, and that’s what makes it useful. The art is knowing which complications can be safely ignored under which circumstances.


Make boring work harder

I was searching for something this morning and ran across several pages where people blogged about software they wrote to help write their dissertations. It occurred to me that this is a pattern: I’ve seen a lot of writing tools that came out of someone writing a dissertation or some other book.

The blog posts leave the impression that the tools required more time to develop than they would save. This suggests that developing the tools was a form of moral compensation, procrastinating by working on something that feels like it’s making a contribution to what you ought to be doing.

Even so, developing the tools may have been a good idea. As with many things in life, it makes more sense when you ask “Compared to what?” If the realistic alternative to futzing around with scripts was to write another chapter of the dissertation, then developing the tools was not the best use of time, assuming they don’t actually save more time than they require.

But if the realistic alternative was binge watching some TV series, then writing the tools may have been a very good use of time. Any time the tools save is profit if the time that went into developing them would otherwise have been wasted.

Software developers are often criticized for developing tools rather than directly developing the code they’re paid to write. Sometimes these tools really are a good investment. But even when they’re not, they may be better than the realistic alternative. They may take time away from Facebook rather than time away from writing production code.

Another advantage to tool building, aside from getting some benefit from time that otherwise would have been wasted, is that it builds momentum. If you can’t bring yourself to face the dissertation, but you can bring yourself to write a script for writing your dissertation, you might feel more like facing the dissertation afterward.

Related post: Automate to save mental energy, not time

A different kind of computational survival

[Photo: abandoned shopping mall]

Last year I wrote a post about being a computational survivalist, someone able to get their work done with just basic command line tools when necessary. This post will be a different take on the same theme.

I just got a laptop from an extremely security-conscious client. I assume it runs Windows 10 and that I will not be able to install any software without an act of Congress. I don’t know yet because I haven’t booted it up. In fact, I can’t boot it up yet because I don’t have the badge needed to unlock it.

If being able to work with just default command line tools is like wilderness survival, being able to work with only consumer software is like urban survival, like trying to live in an abandoned shopping mall.

There must be some scientific software on the laptop. I imagine I may have to re-implement from scratch some tools that aren’t installed. I’ve been in that situation before.

One time I was an expert witness on a legal case and had to review the other side’s software. I could only work on their laptop, from their attorney’s office, with no network connection and no phone. I could request some software to be installed before I arrived, so I asked them to put Python on the laptop. I could bring books into the room with the laptop, so I brought the Python Cookbook with me.

If you don’t have grep, sed, or awk, but you do have Perl, you can roll your own version of these utilities in a few lines of code. For example, see Perl as a better grep.
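Here is a minimal sketch of a grep stand-in, assuming only a stock Perl interpreter is available (my own illustration, not the code from the linked post):

    #!/usr/bin/perl
    # Minimal grep substitute: print lines matching a pattern, prefixed by file name.
    # Usage: perl mygrep.pl PATTERN file1 [file2 ...]
    use strict;
    use warnings;

    my $pattern = shift @ARGV;
    die "usage: mygrep.pl PATTERN [FILE ...]\n" unless defined $pattern;

    while (my $line = <>) {                           # <> reads each file in turn (or STDIN)
        print "$ARGV:$line" if $line =~ /$pattern/;   # $ARGV holds the current file name
    }

The same trick covers simple sed-style substitutions (perl -pe 's/old/new/') and awk-style field splitting (perl -lane '...').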

I always use LaTeX for writing math, but the equation editor in Microsoft Word supports a large amount of LaTeX syntax. Or at least it did when I last tried it a decade ago.

The Windows command line has more Unix-like utilities than you might imagine. Several times I’ve typed a Unix command at the Windows cmd.exe prompt, thought “Oh wait. I’m on Windows, so that won’t work,” and then found that the command works after all. The biggest difference between the Windows and Linux command lines is not the utilities per se; you can install many of them, say through GOW (GNU On Windows).

The biggest difference in command lines is that on Windows each utility parses its own arguments, whereas on Linux the shell parses the arguments first and passes the result to the utilities. So, for example, passing multiple files or a wildcard to a utility may or may not work on Windows, depending on the capability of the utility. On Linux this just works, because the shell itself, rather than each utility launched from it, expands wildcards and splits the arguments before any program runs.
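A two-line script (my own illustration, not from the original post) makes the difference visible:

    # args.pl: print each command line argument on its own line
    print "$_\n" for @ARGV;

Run perl args.pl *.txt from bash and you see each matching file name, because the shell expanded the wildcard before Perl started. Run the same command at a cmd.exe prompt and you typically see the literal *.txt, because cmd leaves wildcard expansion to each individual program.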

I expect this new project will be very interesting, and worth putting up with the minor annoyances of not having my preferred tools at my fingertips. And maybe it won’t be as hard as I imagine to request new software. If not, it can be fun to explore workarounds.

It’s sort of a guilty pleasure to find a way to get by without the right tool for the job. It would be a waste of time under normal circumstances, and not something the client should be billed for, but you can hack with a clear conscience when you’re forced into doing so.