Tuesday, December 30, 2008

Finished door panel prototype


Actually I spent most of the day working on a paper with Andy but there's no cool picture for that. Afterwards I finished the door panel prototype. I think they look pretty good but they were a real pain in the ass. I'd like to do it all over the house but I think I'll wait until I have access to a large mill.

Monday, December 29, 2008

Old media to new media conversion rate? Negligible


I was curious what the conversion rate might be from old media such as Science Magazine to new media like this blog, so I tracked the hits on this blog during the publication of the Science article about me last week. The answer? Anemic. On the day of the release, there were only 90 visits to this blog, and most of those came from Hacker News because my friend Jim posted it there. Granted, that's a lot more than the background of near zero, but compared to times when my website Mine-Control has been mentioned in obscure blogs, it's nothing. For example, an obscure Spanish art/video site once linked to Mine-Control and I ended up with a $1000 monthly bandwidth bill after tens of thousands of hits. A single tag on a social bookmarking site like Digg usually generates thousands of hits. So, despite the fact that lots of people read Science, the conversion rate is apparently low. Of course, this is a single biased sample and it might just be that nobody cared enough about that article, but I suspect it wasn't much different in interest from the much higher-converting blog entries I've been on the receiving end of before. So, thinking of advertising in old media and hoping for a lot of resulting web hits? -- maybe not a great idea.

Thursday, December 25, 2008

More back seat wall progress


Christmas day progress on the back wall. I built a temporary mold with bricks and leveled out the top surface in preparation for the seat course.

Wednesday, December 24, 2008

Back seat wall progress


Today I made good progress on the back seat wall on a perfect 60 degree day. It doesn't require any creative thought since the pattern is regular over the whole length, so it's an almost meditative task that requires nearly zero brain power.

Tuesday, December 23, 2008

Diffusively coupled oscillators

Having previously seen that oscillators near 180-degree phase boundaries run faster, I ran an experiment isolating two oscillators and varying the diffusive constant between them (thanks John). The first graph shows the time trace of the two diffusively-coupled oscillators started 180 degrees out of phase. Note how both the amplitude and the wavelength are different early on compared to later, when the two have synchronized. The next graph shows the peak frequency of the first phase (before phase lock) as a function of diffusion.
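
For concreteness, the setup is roughly the following sketch -- an idealized tanh-inverter ring-oscillator model with plain Euler integration, not the actual simulation code, and with the negated state standing in for a half-period shift:

% Minimal sketch: two 3-node ring oscillators (idealized tanh inverters,
% Euler integration), started ~180 degrees apart, coupled by diffusion D.
dt = 0.01; T = 20000; gain = 4;
nEarly = 5000;                                   % early, pre-lock analysis window
notgate = @(v) -tanh(gain*v);
Ds = linspace(0.01, 0.5, 10);                    % diffusion constants to scan
peakfreq = zeros(size(Ds));
for k = 1:length(Ds)
    D = Ds(k);
    a = [0.1 -0.1 0.1];                          % oscillator A
    b = -a + 0.001*randn(1,3);                   % B: ~anti-phase, tiny asymmetry
    sig = zeros(T,1);
    for t = 1:T
        da = notgate(a([3 1 2])) - a + D*(b - a);
        db = notgate(b([3 1 2])) - b + D*(a - b);
        a = a + dt*da;  b = b + dt*db;
        sig(t) = a(1);
    end
    early = sig(1:nEarly) - mean(sig(1:nEarly)); % pre-lock segment
    f = abs(fft(early));
    [fmax, idx] = max(f(2:floor(end/2)));
    peakfreq(k) = idx / (nEarly*dt);             % peak frequency before phase lock
end
plot(Ds, peakfreq); xlabel('diffusion'); ylabel('peak frequency, early phase');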


I know this is a well-known phenomenon and is the basis of reaction-diffusion systems, so I started hunting for references. The first Google hit was: PRL 96 054101 - Daido and Nakanishi - Diffusion Induced Inhomogeneity in Globally Coupled Oscillators. They show various facets of a similar system without regard to spatial dynamics (like this experiment). They reference Erik's father's (A. T. Winfree) book The Geometry of Biological Time, which looks like a must-read. They also reference an interesting-sounding book, Synchronization - A Universal Concept in Nonlinear Sciences, which looks like another must-read, and the UT library has an online copy!

Monday, December 22, 2008

Feature size varies to the 1/2 power in diffusive latches


I extended yesterday's bi-stable diffusive latch over a larger diffusive range. It was roughly linear over the 0 to 1 domain; over a larger range it is proportional to the 1/2 power. I took Edward's advice and plotted it with error bars and otherwise ignored the deviation information. Here is the sampling over 30 trials, each with different small random initial conditions, with 1 SD error bars.
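
Roughly, the measurement looks like this -- a minimal sketch assuming an idealized tanh bistable latch, Euler integration, periodic boundaries, and feature size measured as the mean run length of sign(u), rather than the actual code:

% Minimal sketch of the feature-size-versus-diffusion measurement.
N = 256; T = 5000; dt = 0.01; gain = 3; trials = 30;
Ds = logspace(-2, 1, 10);                      % diffusion constants to scan
mu = zeros(size(Ds)); sd = zeros(size(Ds));
for k = 1:length(Ds)
    sizes = zeros(trials, 1);
    for r = 1:trials
        u = 0.01*randn(N,1);                   % small random initial conditions
        for t = 1:T
            lap = circshift(u,1) - 2*u + circshift(u,-1);
            u = u + dt*(tanh(gain*u) - u + Ds(k)*lap);
        end
        runs = diff(find([true; diff(sign(u)) ~= 0; true]));   % domain widths
        sizes(r) = mean(runs);
    end
    mu(k) = mean(sizes); sd(k) = std(sizes);
end
errorbar(Ds, mu, sd); set(gca, 'XScale', 'log', 'YScale', 'log');
p = polyfit(log(Ds), log(mu), 1);              % slope p(1) near 0.5 = the 1/2 power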

Sunday, December 21, 2008

Latch with diffusion


Feature size changes with diffusion ... more diffusion, bigger features.


Today I played around with how the diffusion coefficient affects pattern formation in the simple latch case. This is an array of bi-stable switches with uninitialized starting conditions (i.e. a little bit of noise). The feature size varies directly with the diffusion. The graph shows that the mean feature size (blue, multiple trials) rises fairly linearly with diffusion, as does the variance (standard deviation plotted in red, same trials). There's probably a sexier way to make this plot with error bars or something; I'll think about that.

Friday, December 19, 2008

Science article



The journal Science wrote an article about me (free registration req.), published in this week's edition. It's not bad -- at least it gets all the facts right, which, judging by numerous previous experiences, is a real accomplishment in journalism -- kudos to the author, Mitch Leslie. The article is a "Curious Character" kind of story, which is a relief as I feared that it would be a "Man loses legs, runs marathon" story. Unfortunately it has only a touch of what I hoped for, which was a "Want to get into science from the outside? You can! This guy did" story.

I get asked about my odd non-academic entry into the world of science all the time. And I very much hope that my example demonstrates that if you dream of playing the ultimate-nerd-sport of pure science research then just do it. Not only is it possible to enter the so-called ivory tower from the outside, it was easier than I ever imagined. My outsider’s knowledge base was both sufficient and valuable. When I got into science I thought it would take a long time before I could contribute anything. I was pleased to quickly realize that I had wildly underestimated what my contributions would be.

My entry story boils down to this. I went down to UT and talked to a graduate adviser, who gave me the party line ("first get a GED, then get an undergraduate degree, then ... "). As I left that adviser's office, discouraged, I asked for the name of a professor who might be into certain subjects, and he mentioned Edward Marcotte. I took Edward to lunch and we became instant friends because we share a huge enthusiasm for all things nerdy. After hours of geeking-out together he asked: "So what do you want to do?" and I said, "I don't know, just hang out and learn stuff." "Cool," he replied, "there's a desk. Meetings are on Fridays".

That's really all there was to it. I started hanging out in his lab and everybody seemed to assume I was a postdoc. Before long I had met several other professors and within weeks I was working on more projects than I will be able to finish in my lifetime. It wasn’t long before people were making job offers. While this episode might be a rare event based on the meeting of two like minds, I think it says something about the refreshingly open culture of science. Don't get me wrong, academic science is a human endeavor with human feelings of territorialism, etc., but in comparison to many other fields, it deserves credit for being fairly open-minded and meritocratic. After all, science is the ultimate nerd pursuit -- and nerds as a stereotype value technical achievement over prestige (not all, but many). Still, contrast it to walking into the similarly nerdy engineering department of a major corporation, say Boeing or GM, and telling someone that you just wanted to "hang out". Even if you found a friend in the company it wouldn't be long before a higher-up manager would suspect you of being a corporate spy and want you to either join the company or get out while threatening your friend with NDA violations.

Part of the openness of academics lies in the simple fact that a university is not a chartered feudal hierarchy but rather a coalition of independent lords with a governing body. (I suspect this is not so much an analogy as actual history: English academia is modeled after the post-Magna Carta arrangement of free, independent lords under royal patronage.) Thus, a tenured professor or "principal investigator" (PI) such as Edward runs his lab however he sees fit -- constrained only by safety, morality, and money. That said, there are standard working procedures: undergraduates become graduate students become doctors become post-docs become professors. So, while it is very abnormal for an outsider like me to just show up out of nowhere, the system is refreshingly tolerant of such an entry.

When writing this story, the author, Mitch, called my friend Professor John Davis of the EE Department. John told me that Mitch asked: “So should we be looking for more Zacks or is he totally unique?” I said to John, “I hope you replied that there are lots more Zacks in the world!” John fell silent. “Oh no!” I exclaimed. I mean, just among my own friends I’ve already gotten three people to come into the system in ways somewhat analogous to my own entry. Thomas -- a game programmer now working on molecular simulators for two labs. Mark -- a game programmer and self-taught organic chemist working in another lab. Steve -- a playwright turned biotech entrepreneur about to be employed by the Center for Systems and Synthetic Biology. I mean, if 3 of my small circle of friends have been inspired to get into science in just 5 years, then there must be tens of thousands of other outsider-nerds waiting to be recruited! It’s a vast would-be nerd conspiracy! The only thing I hoped for from this article is for those people to be inspired to action, if they so choose, and I'm not too sure that came across.

I’ve made this argument about my entry and non-uniqueness to several “insider” friends and I keep getting the same response: “But Zack, you’re so smart”. I find this response psychologically interesting. I can’t help but think that my insider friends find it easier to explain me as a freak of nature than to admit that all the expense and work they went through to get into their positions could be so easily bypassed. Of course, they well know that I studied just as hard as they did to get where I am. I wasn’t born knowing things any more than they were. But there is a difference in our paths -- I never did even one second of work I didn’t want to do, while many of my grad student friends frequently (and somewhat hyperbolically) complain of being treated like slaves. So, yes, I’m smart; but I’m no smarter than my PI friends such as Edward, John, or Andy.

Indeed, Edward and I form an almost perfect experiment and control. Edward and I are freakishly similar. We are both high-functioning and mildly autistic. We have eerily similar responses to many stimuli and have very similar temperaments. We both hate being told what to do. The only really significant difference in our skills is that I have dyslexia and he has whatever the opposite of that would be called (“superlexia”?). He can read 20 papers in the time it takes me to read 1. We both went to bad public high schools, although his was slightly better than mine. Had my school been a little bit better or his a little bit worse, we could easily have ended up on the other one’s trajectory. What’s different about Edward and me is mostly the path we took, not our natures. And it is why we work so well together – we have different points of view backed by the same intelligence and enthusiasm.

People (such as my own family) often frame my story as success “despite” dropping out of school. I find this highly prejudiced. Nobody ever seems to consider that I succeeded “because” I dropped out of school. It seems to me that our society treats school as a kind of magical elixir – a cure for whatever ails ‘ya. Poor and disadvantaged? School! Rich and spoiled? School! Curious? School! Bored? School! Let me be clear -- universal access to school is one of the greatest and most important accomplishments of our civilization. I am not dismissing the wonderful contribution of formal education to the world. That said, school is not a cure-all. It is not the perfect path for everyone’s journey. To make my case, let me point out some of the advantages of my path.

First, my natural temperament is to resist doing anything I’m told to do. My mother claims I’ve been like this since I was born and that parenting me was an exercise in making me believe that things in need of doing were my idea. So getting out of school took away all of this unnecessary friction. (One can argue that I should have “just gotten over” that stubborn streak and I’d counter that if school cures whatever ails ‘ya then why didn’t it “fix” that?)

Second, by entering the workforce at 17, I started saving money very early and the compound interest on that savings is significant. While my friends went into debt to educate themselves (some are still paying those debts), I was being *paid* to educate myself. At 38 I’m in a much better financial position than my friends who went through school and that affords a lot more options such as, but not limited to, hanging out in labs, making artwork, and building pretty houses.

Third, I arguably have a superior education -- after all, I had a student-to-teacher ratio of one to one! While they sat in big anonymous classes I sat on the porches and couches of those same professors’ homes. All my teachers were my friends; they didn’t teach me because it was part of an institutional compact, but rather because that’s what friends do -- they hang out, they share ideas, the older ones impart knowledge to the younger while the younger impart enthusiasm to the older. That bond of friendship is much stronger than the one between a professor and a student, and the two-way street of care and respect that is the magic of education is consequently more robust when it is spontaneous and voluntary.

Fourth, I never did anything I didn’t want to do. I never did someone else’s dirty work. I didn’t take any retrospectively useless classes. I didn’t worry about my grades. I didn’t suck up to any professors. I didn’t have to prove myself to arbitrary gatekeepers. I wasn’t told what to learn and more importantly I wasn’t told what not to learn. Someone once told me that I “owned” my knowledge while others seemed to “borrow” it and while I think that is overstating it, the degree to which there is truth in that statement is a result of constructing the learning path myself.

Fifth, I ended up with a broad knowledge base. My knowledge in any one field is certainly shallower than any of my friends’ knowledge in their respective fields, but I have a passing knowledge of a lot more fields. Grad school is very narrowly focused and consequently it seems to me that it is as much about indoctrination as it is about education.

The world needs lots of people who have deep, penetrating knowledge of their subjects. The world also needs people who have broad but consequently shallower views of many subjects so that they can help to bridge subjects. The educational system produces many of the first type but few, if any, of the second. Indeed, this gets me back to my thesis: I think my utility, my success, is *because* I didn’t go to school, not despite it. Outsider opinions are necessary and valuable; they, ipso facto, don’t come from inside the system.

Tuesday, December 16, 2008

Artwork videos


Finally got around to updating the videos of some recent art pieces. All my videos are indexed off of mine-control.com -- the new ones are birthday, resonator, dragonfly, diffusion, and elevator goblins.

Sunday, December 14, 2008

Acid washing and starting of back wall



This morning I did the first acid wash of the planter. The acid is pretty nasty to work with. I'm probably overly paranoid, but I get dressed up in a full acid apron, face shield, and gloves. The acid wash makes it look so much better, so despite what a pain it is, it's really quite fun to see the final product emerge. It usually takes two passes to get it really clean, with a power wash after each acid application. Unfortunately I don't have a power washer at the moment so I'm just going to leave it like this for a while until my roommate Aaron brings his from Houston.


Meanwhile, I also finished up the brick apron adjacent to the driveway, which is where I park the garbage cans on garbage days -- a small detail, but worth it.



Then I started on the back seat wall. First I stacked up dry bricks to work out the pattern and then started on the first few courses until I ran out of mortar for the day. That small section is about 2 hours of work once you include mixing, cleanup, etc.

Saturday, December 13, 2008

Planter brickwork completed


I finished the brickwork on the planter this morning. All that's left is acid washing, filling with dirt, and planting. I think it took me something on the order of 40-60 hours over about 3-4 months -- but that's a guess, I don't keep track.

Friday, December 12, 2008

Fretview


Today I worked on updating an old project of mine, "fretview", which is used by Rick Russell's lab to do analysis for single-molecule kinetics. We had recently improved the capture program to permit pixel binning, but the analysis code did not yet take this into account; as a result it tended to incorrectly sub-sample binned pixels, producing rounding errors when coordinates were mapped from side to side.
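
The fix amounts to doing the side-to-side mapping in unbinned camera coordinates. Here's a minimal sketch of the idea -- the function name, the 2x3 affine transform, and the 1-based pixel-center convention are all illustrative, not the actual fretview code:

% Minimal sketch: map a spot from the donor half to the acceptor half in
% unbinned camera coordinates so the binning factor never adds its own
% rounding step. All names here are hypothetical.
function xy_acceptor = map_donor_to_acceptor(xy_donor, binfactor, tform)
    % xy_donor  : [x y] spot position, in binned pixels, on the donor side
    % binfactor : pixel binning used at capture time (1, 2, 4, ...)
    % tform     : 2x3 affine side-to-side transform calibrated in unbinned pixels
    xy_raw      = (xy_donor - 0.5) * binfactor + 0.5;    % binned -> unbinned
    xy_mapped   = (tform * [xy_raw(:); 1])';             % donor -> acceptor side
    xy_acceptor = (xy_mapped - 0.5) / binfactor + 0.5;   % unbinned -> binned
end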

Wednesday, December 10, 2008

Pattern formation phase experiments

Today I played around with trying to understand where the patterns come from in simple oscillators. In the following picture, the center region is exactly 180 degrees phase-shifted relative to the outside (circular "boundaries" as always). Note the cool reconnection events about 1/3 and 2/3 of the way up from the bottom (t=0).



The interesting thing here is that the boundaries begin to oscillate faster than the surrounding regions. The center goes through 6 cycles in the time it takes the boundary to go through 7. At 7 edge-cycles versus 6 center-cycles there's a disconnection event where the two regions become disjoint; they then reconnect one cycle later. These discontinuities are where interesting patterns emerge.

Why should it be that the boundary oscillates faster than the center? This is a bit counter-intuitive. Imagine two oscillators sitting next to each other and diffusing some of their energy into each other. Consider the moment when the first oscillator is at its maximum value and the second is at its minimum. At this moment the first oscillator is dumping a lot of its material into its neighbor. In other words, right when the second should be at its minimum value it is instead being "pulled forward" by the incoming flux. Conversely, by dumping flux into its neighbor, the first never quite makes it to its maximum value and thus sort of short-cuts its way to the downward part of the cycle. Half a wavelength later the reverse is true. Thus, each oscillator acts to pull the other one ahead, and both run a little faster as their amplitudes are reduced.
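
Here is roughly how the phase-shifted-center setup can be reproduced -- a minimal sketch with an idealized tanh-inverter model and Euler integration (negating the state stands in for a half-period shift), not the actual simulation code:

% Minimal sketch: a field of 3-node ring oscillators, periodic boundaries,
% with the center third started ~180 degrees out of phase.
N = 192; T = 8000; dt = 0.02; gain = 4; D = 0.3;
notgate = @(v) -tanh(gain*v);
s = [0.1 -0.1 0.1];                                  % settle one oscillator first
for t = 1:5000, s = s + dt*(notgate(s([3 1 2])) - s); end
x = repmat(s, N, 1);                                 % whole field at one phase
x(N/3:2*N/3, :) = -x(N/3:2*N/3, :);                  % center third: ~180 degrees off
rec = zeros(T, N);
for t = 1:T
    lap = circshift(x,[1 0]) - 2*x + circshift(x,[-1 0]);
    x = x + dt*(notgate(x(:,[3 1 2])) - x + D*lap);
    rec(t,:) = x(:,1)';
end
imagesc(rec); xlabel('space'); ylabel('time');       % watch the boundary regions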

As noted before, work on coupled oscillators is as old as Huygens's observations of 1665. Here's a more recent synthetic-biology investigation from Garcia-Ojalvo, Elowitz, and Strogatz. What I haven't found yet (probably because I haven't looked hard) is a paper showing the spatial dynamics of such coupled oscillators as demonstrated here.

So what happens when the two regions are not started exactly 180 degrees out of phase? Yet another interesting instability forms. Here's the same thing at 170 degrees:



This time the boundary between the two regions begins to wobble around as the two sides compete for control of the boundary space. This instability also creates interesting disconnection / reconnection events around 7 cycles. And what if we symmetry-break the sizes of the two areas? Here's 180-degree separation with the center region a bit smaller than the outer:



Now you see the unstable edge oscillation like in the case above, once the perturbations travel all the way around and end up interacting with the center during the second reconnection event. Clearly such patterns are all reminiscent of diffraction scattering and other sorts of complicated spatial pattern-forming phenomena where waves bounce around inside closed spaces -- I find all such phenomena hard to intuit and these examples are no different. Where things get fun, IMHO, is seeing how noise plus such simple oscillators generates interesting formations like the ones I posted a few days ago.

Several people have asked me what the relationship is between the simulations I'm showing here and cellular automata. I argue that these systems are analog, memoryless versions of CAs. While CAs are very logically simple, they aren't nearly as hardware-simple as the systems I'm working on here. For example, the 256 binary 1D CA rules that Wolfram so lovingly illustrates are simple rules, but their implementation presupposes both memory and an a priori defined lattice that includes left/right differentiation. However, as Wolfram points out on page 424 of NKS, the symmetric 1D rules do generate interesting short-term random patterns when initialized with random state, so these are a good binary model for the analog systems I'm exploring here.
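
For comparison, here's how little it takes to run a symmetric elementary CA from a random initial state (rule 90, the XOR-of-the-two-neighbors rule, is one of the left/right-symmetric rules):

% A symmetric elementary (binary, 1D) CA run from random initial state.
rule = 90; N = 256; T = 128;
tbl = bitget(rule, 1:8);                       % rule output for neighborhoods 0..7
x   = rand(1, N) > 0.5;                        % random initial state
out = zeros(T, N); out(1,:) = x;
for t = 2:T
    code = 4*circshift(x,[0 1]) + 2*x + circshift(x,[0 -1]) + 1;   % neighborhood code 1..8
    x = tbl(code) > 0;
    out(t,:) = x;
end
imagesc(out); colormap(gray); xlabel('space'); ylabel('time');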

Meanwhile, my friend Erik Winfree's lab has very cleverly built DNA crystalline structures that do define a lattice and thus can implement the Turing-complete rules at molecular scales. But on the scale of complexity, I'd argue that these amorphous analog systems are "simpler" in the sense that I can more easily imagine them evolving from interacting amplifiers that would have independent precursor functionality, and without imposing a lattice. Erik might disagree, but anyway, it's this idea of evolving interacting amplifiers that I'm going to work on as I continue this.

Tuesday, December 9, 2008

Pattern formation sanity check



I ran a test where I changed the spatial resolution of the ring-oscillator system (changing the number of spatial buckets while also changing the capacitance and conductance variables accordingly) to make sure that the pattern formation is not an artifact of the integration technique. These images show 32-, 64-, and 128-bucket integrations. It is clear that the spatial resolution matters in the sense that you can see a few small changes (features look temporally sharper, not just spatially), but I don't think that the pattern formation is an artifact. As always, thanks to JHD for help in working out the right parameter transformation -- which he knows like the back of his hand because it's equivalent to a transmission line / heat equation.
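
For the record, here's the rescaling as I understand the transmission-line / heat-equation analogy (the variable names are illustrative, not the ones in my code): to keep the physical system fixed while changing the bucket count, per-bucket capacitance scales with the bucket width dx and the inter-bucket conductance with 1/dx, so the per-bucket coupling rate goes as D/dx^2.

% Sketch of the parameter transformation across bucket counts.
L = 1;  D = 0.1;                         % physical length and diffusivity (held fixed)
for nbuckets = [32 64 128]
    dx = L / nbuckets;
    c  = dx;                             % per-bucket capacitance   ~ dx
    g  = D / dx;                         % inter-bucket conductance ~ 1/dx
    fprintf('%4d buckets: dx = %.5f, coupling rate g/c = %.1f\n', nbuckets, dx, g/c);
    % ... run the same reaction terms with the discrete Laplacian scaled by g/c ...
end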

Saturday, December 6, 2008

Planter progress


A little progress on the top of the planter this morning.

Friday, December 5, 2008

More AC experiments





I'm starting to have fun exploring the possibilities of this AC simulator. Above are spatially stable patterns using a bi-stable latch and small random initial conditions (approximating the noisy conditions of uninitialized amps). In the first picture there is no diffusion, so each parcel of space commits to one of the two states randomly. In the second picture, with diffusion, larger areas that by chance share a state tend to recruit their neighbors into that state. But all of this recruitment must happen early, because the gain on the latches eventually wins, at which point there's no changing anyone's state (like an election). Thus, by dialing the ratio of diffusion to latch gain, you can choose the mean size of the features, which is a cool phenotype all by itself. For example, imagine that this was a self-organized filter -- that one parameter could allow the construction of different kinds of mechanical particle filters.
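
A minimal sketch of the two cases -- an idealized tanh latch with plain Euler integration and periodic boundaries, illustrative rather than the actual simulator:

% Same noisy start, without and with diffusion; the D/gain ratio sets feature size.
N = 256; T = 3000; dt = 0.02; gain = 3;
u0 = 0.01*randn(N, 1);                         % same "uninitialized" noise for both
for D = [0 0.5]
    u = u0;
    rec = zeros(T, N);
    for t = 1:T
        lap = circshift(u,1) - 2*u + circshift(u,-1);
        u = u + dt*(tanh(gain*u) - u + D*lap);
        rec(t,:) = u';
    end
    figure; imagesc(rec); xlabel('space'); ylabel('time');
    title(sprintf('latch field, D = %.1f, gain = %.0f', D, gain));
end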



In this picture I've started to combine features. The left and center are two independent ring-oscillators with noisy initial conditions, which create these interesting patterns as I've shown previously. (Although I'm still not positive they aren't artifacts, I'm starting to get a theory about how they form, and I'm going to be testing those ideas with controlled experiments tomorrow.) On the right is the product of the two oscillators, which results in interesting spatio-temporal patterns. Like the latches above, these patterns are uncontrollable in all but gross properties because the pattern's position is the result of what amounts to "fossilized noise". In other words, the asymmetries at t=0 are amplified/converted into patterns at later times. That said, the form of the patterns is inspirational -- it hints at what is possible with potentially more information-rich initial conditions. For example, I now have an inkling of how to partition space into integer sub-divisions (like fingers on a hand) without explicitly putting them there -- I'll be trying that soon.

Wednesday, December 3, 2008

Oscillator + Diffusion + Noise = Pattern


(Ring-oscillator with diffusion; x-axis: space, y-axis: time)

After an incredible multi-day pain-in-the-ass getting Matlab installed, I'm able to start exploring some of the amorphous computations possible with this toy model I've been playing with. (Previous results came from running Matlab over X, which was painfully slow.) The above image is a simple ring-oscillator with diffusion, initialized with small random values. The random initial values seem likely in a molecular implementation in which the inputs to the molecular amplifiers are un-initialized and small stochastic deviations therefore dominate.

I know that simple processes can produce complicated structures as Wolfram is wont to repeat, but it's still astonishing when you see it. I mean, this thing has no clock, no memory, no boundaries, no initial conditions (just background noise) and a very simple oscillator; it doesn't get much simpler than that. I think the result is kind of beautiful, sinuous, like a tree made of waves. Maybe I'll do my next door panel like this.

All that said, I'm not positive that the patterns aren't an artifact of the integrator. Since I partition space up uniformly, it might be a result of that. I need to run a test where I reduce the spatial step and proportionately reduce the concentrations but my code isn't set up for that yet.

Sunday, November 30, 2008

Door panels 2nd panel and stain


It's taking more than an hour per panel but it feels like I could get it under an hour once I get good at it. There are nearly 100 panels in the house, so this is obviously not going to be something I do all myself. I think I'll finish up this door as a prototype and then wait until I buy a big mill or have a mill shop do the rest. Either that, or hire some hourly labor for about 100 hours.

Friday, November 28, 2008

Molecular and Cellular Videos (External Link)

http://www.molecularmovies.com/showcase/index.html

OK, I thought I'd keep my blog mostly about my own projects, but sometimes one runs across something really cool and blogging about it increases its Google score. My friend Eric Siegel at the NY Hall of Science sent me this link to a large collection of nice molecular and cellular animation videos.

I love videos like this. That said, I do have a very big complaint about the non-simulations (most of them) -- they make molecules appear to be intelligent agents. Molecules do not make deliberate choices; they do not see a complex forming and then think to themselves: "Hey, I think I'll whiz over there and insert myself into that growing structure!" For example, see the microtubule growth in Inner Life.

It is completely understandable that the animators of these videos have a hard time capturing the reality of molecules, because the velocities at which things happen at the nano-scale are extremely difficult to comprehend and thus it is hard to create these animations without resorting to the "cheat" of "deliberateness". Unfortunately this cheat creates a major confusion -- I know because I remember being confused! In Sagan's wonderful Cosmos series, there was an animation of DNA polymerase with its reagents all flying across the screen to assemble themselves into a growing polymer. I distinctly remember as a nine-year-old thinking: "How do the parts know where to go?" No one told me that 1) that's a great question and 2) they don't.

Here's the way animators create these effects. They place the pieces of the model together in their final configuration and then tell the animation program to fling all these pieces away in random directions with random tumbles. Then they simply play the animation backwards to create the effect of the individual molecules assembling themselves into the formation (that's the easy way to do it, anyway). It creates the lovely assembling effect but it is a lie -- a very, very interesting lie.
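
A toy version of the trick, purely for illustration (everything here is made up for the sketch):

% Explode the assembled positions with random velocities, then play the
% frames in reverse so the pieces appear to seek out their places.
npieces = 20; nframes = 100;
assembled = rand(npieces, 2);                  % final (assembled) 2D positions
fling     = 2*randn(npieces, 2);               % random fling velocities
frames = zeros(npieces, 2, nframes);
for f = 1:nframes
    frames(:,:,f) = assembled + (f-1)/nframes * fling;   % blow it apart...
end
for f = nframes:-1:1                           % ...then run the film backwards
    plot(frames(:,1,f), frames(:,2,f), 'o');
    axis([-4 5 -4 5]); drawnow;
end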

Think about it -- in order for the animators to make it look like the molecules know what they're doing they have to run time backwards. That isn't merely a statement about animation -- it affords a deep insight into thermodynamics. Things which "know what they're doing" are, in effect, "running time backwards". Getting your head around this idea is the key to understanding what life is, why perpetual motion is impossible, and failing to understand it is central to many misconceptions especially among creationists.

Molecules don't know where they are going. They just thrash around randomly due to collisions. The sum of all that motion is what we call "heat" -- more heat, more violent thrashing around. If you were to put some molecules in a little pile they would bounce off each other, spreading out into a more diffuse pile. Why should they spread out and not stay put or even compact themselves tighter? Because, as long as they aren't interacting with each other (we'll come back to this case), there are a lot more ways to be spread out than there are to be compact. Scientists call this by the weird name "entropy" -- it's the second law of thermodynamics: entropy (spread-out-ness) is always increasing. It's an idea that's so simple and yet so profound. Why is it true? Nobody knows; that said, try to imagine what the world would be like if it were false.

Suppose that molecules spontaneously created little ordered piles without interacting (again, we'll come back to the interaction case). Those little piles are information. In other words, you could look at them and say: "Hey, there's a little pile there that shouldn't be -- since they aren't interacting they should have spread out, thus, something must have put them there." And then what? What are these little piles of spontaneous information forming? Are they spelling out Shakespeare? Or drawing a picture of a cat? Or writing out a cryptic secret that we can't read? See, it's nonsense; you can't turn it around. When you try to imagine a world that doesn't spread out spontaneously then you end up with a world where information spontaneously appears out of nowhere, and such a world would be indistinguishable from one where time was running backwards. In other words, the concepts of time and increasing entropy are the same concept.

Here's another way to think about it. Suppose that you had a tiny ball in a tube trap. Say the ball can be on either side of the tube: left or right. If the ball and tube are not interacting in some biased way then there's just as much chance that you'll find the ball on the left as on the right. Say you tried to use this tube as a memory device with the position of the ball meaning different things. You reach in and move the ball to the left side, then shut the trap and hand it over to a friend who examines it. You shouldn't be surprised that when they open it they are just as likely to see the ball on the right as the left. This is a terrible memory device! The reader of the information might as well have just flipped a coin instead of relying on this thing to remember what you entered. How would you fix this? You'd have to glue the ball in place somehow to prevent it from moving. So, how would you glue it? There are lots of ways: you could introduce a chemical bond that stuck the ball and tube together, or you could jam in a plug, or lots of other clever contraptions. But every way of "gluing" will have the same requirement: it will need an investment of energy. In other words, an investment of energy is the same thing as information. If you see a pile of energy lying around somewhere then you know that such a pile potentially holds information (what that information encodes or means is a totally different question). And vice-versa, if you know some information then it must be the case that energy was invested to make it known. The two concepts -- information and free-energy -- are the same concept! And this explains why you can't build a perpetual motion machine. If you could, then it would be creating information out of nowhere, which is the same thing as time running backwards. Or, to put it another way, if you do build a perpetual motion machine then (just try to) stay the hell away from it, because that thing is running time backwards!

And this gets us back to life. If it is the case that things can't spontaneously assemble, then how can there be living things which are made from spontaneously assembled molecules? The fact that life is so information-rich, is this evidence that something made the investment of free-energy? Yes. Shall we call this investor of free energy some sort of god or spirit or vitalistic force? That's a reasonable question, and I've seen this argument in creationist literature, but the answer is: no.

This gets us back to the videos and what's wrong with them. The videos make it appear that molecules "know" what they are doing. They seem to "know" that they should fly through space and attach themselves to some cool growing nano-machine. But they don't. What they do instead is much more interesting. They bounce all over the place without knowing squat. Why don't they spread out? They do, but they are held inside of a bag -- the cell -- which keeps them contained. When they bounce around they accidentally find molecular partners with whom they interact. This is very different than what I described before with the ball in the trap, where we assumed that there was no interaction. Now there is interaction -- they stick like glue. As described, such gluing requires energy. Where does the energy come from? It is pumped into the cell from the outside. And when the interactions break, that energy is released at higher entropy (time moving forward) and that entropy is pumped outside of the cell to keep it from poisoning the inside. Living things are devices that invest free-energy from their environment to temporarily increase the information inside of the cell. This is only possible because they have access to the free-energy; no free-energy, no life. By the way, there are lots of things that do this, not just life. For example, a whirlpool is a pretty clearly defined "thing" that is possible because free-energy in the form of rushing water gets trapped into a shape that then dissipates the entropy out the bottom. Whirlpools, and living things, are not "things" in the sense that they are persistent collections of molecules -- they are things in the sense that they are persistent patterns of molecules -- the molecules themselves just pass right through.

What makes life really interesting and different from a whirlpool is that it is a self-contained computational device that stores the changeable instructions to copy itself. A whirlpool's pattern is created by the external circumstances around it -- the pattern of the rocks and the waterfall. In contrast, living things internalize the "circumstances" that build them (the DNA, the proteins, etc) thus living things can be viewed as a single package that makes decisions and evolves as a computational whole. The magic of living things is that no individual part (the molecules) "knows" what it's doing (my problem with these videos) yet the ensemble does "know" what it's doing! When we casually look at a living thing we can't easily track the energy flux in and the entropy flux out and thus living things appear unique, as if they were running time backwards -- exactly the trick the animators use to make the (wrong) animations. Ha!

Thursday, November 27, 2008

Amorphous computing experiments in matlab



I've been playing with what I hope will be an interesting formulation of amorphous computing simulations involving randomly generated logic networks. I first prototyped these in C in my zlab framework but have decided to move them to Matlab, both to make it easier for others to work on and because, as I move from 1D to 2D, I'll need a fancier integrator than RK45. Matlab offers a lot more ODE solvers than does my current C framework, where I would inevitably have to port in Fortran solvers.

The above figures show the first test results from the Matlab code. A three-node ring oscillator (that's 3 "not" gates connected in a cycle) is arrayed across space (x-axis). In both figures, the oscillators are randomly initialized (the same ICs in both images) and thus begin to oscillate through time (y-axis). In the first image there is no communication between the spatial machines, so each vertical stripe oscillates in its own arbitrary phase. In the second figure, the exact same machines and ICs are now allowed to exchange information through space by diffusion, and you can see that there is a rapid phase alignment between the vertical stripes. Think of it like this: each machine is trying to recruit its neighbors into its phase. At the start, by chance, some neighbors will happen to have similar phases and thus will be able to dominate their neighbors and bring them over to their phase, resulting in a larger dominating force that makes it easier to dominate even more neighbors, and so on, until the whole space phase-synchronizes.
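
The core of it is only a few lines. Here's a minimal sketch of the idea -- an idealized tanh "not" gate and plain Euler integration stand in for the real model and ODE solver:

% A 3-node ring oscillator replicated across space, coupled by diffusion.
N = 128;                             % number of spatial machines
T = 4000; dt = 0.05;                 % time steps and step size
D = 0.2;                             % diffusion constant (0 = the uncoupled case)
gain = 4;
notgate = @(v) -tanh(gain*v);        % idealized "not" gate
x = 0.01*randn(N,3);                 % small random initial conditions
rec = zeros(T,N);                    % record node 1 of every machine
for t = 1:T
    lap = circshift(x,[1 0]) - 2*x + circshift(x,[-1 0]);   % periodic Laplacian
    dx  = notgate(x(:,[3 1 2])) - x + D*lap;                % ring: 1<-3, 2<-1, 3<-2
    x   = x + dt*dx;
    rec(t,:) = x(:,1)';
end
imagesc(rec); xlabel('space'); ylabel('time');   % stripes phase-align when D > 0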

This effect has been known for centuries -- it was described by Huygens in 1665 when he noticed pendulum clocks hung on the same wall phase-synchronizing because they could communicate by vibrating the wall. Here's an article about a nanomachine that does the same thing.

Lots more of these results to come now that I have the basic matlab framework built. Early indications are that some interesting things are possible.

Tuesday, November 25, 2008

Kinetic Explorer v2.0 Released



http://kintek-corp.com/kinetic_explorer/

This is a reaction simulator and data-fitting project that I started years ago with Ken Johnson and Thomas Blom. We have just released version 2.0, which includes substantial improvements in the integrator and a nice tool for viewing the parametric fit space. After playing with this for years now, I'm convinced that the major problem with fitting tools is that it is incredibly easy to fool yourself into believing that you have a well-constrained system when you don't. In this and the upcoming version we've put enormous effort into a UI that can demonstrate whether a system is well constrained and, if not, why. Thanks to a lot of effort by my bestest-nerd-buddy John Davis, v3.0 will have a brand-new super-optimized fitter that uses singular value decomposition to dramatically improve the fit descent and also provide instant feedback on the system's condition rank in signal-to-noise units.
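
The underlying idea, sketched generically (this is not the KinTek implementation; the function, its finite-difference Jacobian, and the printout are just my illustration): the singular values of the noise-scaled sensitivity matrix tell you, in signal-to-noise units, which parameter directions the data actually constrain, and directions near or below 1 are lost in the noise.

% Generic conditioning report for a model y = model(p, tdata) with noise SD sigma.
function report_conditioning(model, p, tdata, sigma)
    y0 = model(p, tdata); y0 = y0(:);
    J  = zeros(numel(y0), numel(p));
    for j = 1:numel(p)
        h  = 1e-6 * max(abs(p(j)), 1);
        dp = p; dp(j) = dp(j) + h;
        yj = model(dp, tdata);
        J(:,j) = (yj(:) - y0) / h;          % finite-difference sensitivity column
    end
    [U, S, V] = svd(J / sigma, 'econ');     % scale by the measurement noise SD
    s = diag(S);
    for k = 1:numel(s)
        fprintf('direction %d: S/N = %9.3g   parameter mix = %s\n', ...
                k, s(k), mat2str(V(:,k)', 3));
    end
end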

Door Panels -- Milling experiments



I started milling a prototype panel. I thought that choosing a panel with large simple geometry would be the easiest, but I was wrong. Because the cuts were larger than the router base, I kept having to shove in awkward pieces of thin plywood as a scaffold to replace the support lost from the removed material. I experimented with using different depths of cut for the different branches but the difference was unnoticeable, so I instead experimented with a dado line between branches to mark a visual boundary, but I think that this too is unnoticeable, so I will abandon it in all the future cuts. This prototype ended up pretty ragged, but it is the bottom panel of a workshop door so I probably won't bother replacing it. The picture shows the panel before staining.

One unexpected thing that I like is that the inner layers of plywood have defects and I think that the knots add to the feeling of the tree I'm looking for.

Door Panels



Last night I started on door panels. I had wanted to do something decorative with the doors, but during construction it became clear this project would have to wait. Bruce did a beautiful job on the door fabrication and made it so the panels are easily removable. Tonight I began by reviewing my pics folder for trees and I found a nice one of my backyard pecan taken last winter. I set up the projector and then traced various branches onto the doors.

Tracing a natural object is a really good exercise. What you think a tree looks like in your mind and how it actually looks are so different that it's quite stunning. The cartoon vision in one's head has branches always spreading upwards and outwards. When I close my eyes I can see knotty branches, but when you actually trace a tree you realize that only the major branches are knotty enough to notice. In my cartoon vision the smaller branches fork fairly regularly, but tracing makes you realize that it is actually very haphazard. Trees are not the result of a nice simple construction rule executed recursively but rather of construction and *destruction*. Without the fallen and broken branches, trees look simulated.

I'm thinking of varying the depth of the cuts and hence the numbered sections. As usual with my projects, I have no idea if this is going to work. If it doesn't, oh well, I can replace it with just a sheet of plywood.

Monday, November 24, 2008

Planter masonry









I've started on the planter masonry in the front of my house. I want a roughly exponential spiral that appears to have somehow grown naturally in place. I also want it to afford a comfortable conversation with the front steps. Unfortunately I didn't realize how unlevel this part of the sidewalk foundation was until after I started laying bricks. I had wanted it a little out-of-level to promote drainage, but it is a lot more noticeable than I thought it would be -- as you walk up from the neighboring house your eye compares the lines of the planter to the front porch brickwork and it is very clear that they are out of sync. So I've changed the design so that the top part of the planter is deliberately crooked to exaggerate the effect, in an "if you can't beat it, embrace it" spirit. Inevitably this limitation pushed me to a new place I wouldn't have gone, and I like the new design better in some ways.

This is something that I really like about masonry -- you are forced to commit. In many ways, masonry is the exact opposite of software engineering. Software has a neverworld feeling: it is light, squishy, virtual, and totally forgiving -- if you screw up you just revert the version control. Masonry is heavy, real, and completely unforgiving -- when you screw up you either live with it or get out a sledgehammer. (One step I didn't like took me over a week of pounding with a hammer and chisel to remove -- about twice as long as it took me to build it in the first place.) Software's undo button permits a kind of intellectual laziness where anything that isn't exactly how you imagined it is cast as merely a "bug" awaiting correction. With masonry you are forced into finding ways to convert mistakes into features. It is challenging but creatively healthy. Every day I find myself sitting on my porch for a few minutes, staring at this pile of bricks and moving them around, trying to decide what happens next. Then I lay the next course and think again.