
## And Speaking of Biologists and Mathematics

Well, I see that Nick has been a little more active this last month.  Thus, it is with great guilt that I try to step beyond my laziness and post something!

First, some announcements:

• I am going to try keeping the Mathematical Biology Seminar going this Fall quarter, though the meetings will be every other week.  My plan is to focus on Neural Network modelling and the use of NEURON software to implement some simple network models.
• I would also like to keep the old CAMG meeting going, assuming there is any interest.  Drop a comment if you’d like to join us in exploring $\LaTeX$, as well as some other software.

One of my summer projects is to look at the 1993 paper, “For Neural Networks, Function Determines Form” by Francesca Albertini and Eduardo Sontag. An interesting aspect of this paper is that for a certain kind of neural network (that is, when the network can be modelled by a particular system of ordinary differential equations), a particular input/output behavior completely characterizes the weights of the connections between the neurons.
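To fix notation for myself, here is a minimal numerical sketch of the kind of system I have in mind: a recurrent network of the form $\dot{x} = \tanh(Ax + Bu)$ with readout $y = Cx$.  The precise form used in the paper may differ slightly, and the weights and dimensions below are made up for illustration:

```python
import numpy as np

def simulate(A, B, C, u, x0, dt=0.01, steps=1000):
    """Forward-Euler integration of x' = tanh(A x + B u(t)), y = C x."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for k in range(steps):
        x = x + dt * np.tanh(A @ x + B @ u(k * dt))
        ys.append(C @ x)
    return np.array(ys)

# A tiny two-neuron network with one input channel and one output channel.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
y = simulate(A, B, C, u=lambda t: np.array([np.sin(t)]), x0=[0.0, 0.0])
```

The claim of the paper, as I read it, is that the trace `y` (over enough inputs `u`) pins down `A`, `B`, and `C`.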

This follows from the form that the ODEs take and from the assumption that the input functions are continuous.  We then get a kind of analogue of the existence/uniqueness theorem seen in any basic differential equations course.
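One caveat worth keeping in mind: because $\tanh$ is an odd function, flipping the sign of a neuron’s state (conjugating $A$ by a diagonal $\pm 1$ matrix $T$, and transforming $B$ and $C$ to match) leaves the input/output behavior exactly unchanged.  So “completely characterizes the weights” can only mean up to symmetries of this kind.  A quick numerical check, again assuming the $\dot{x} = \tanh(Ax + Bu)$, $y = Cx$ form with made-up weights:

```python
import numpy as np

def run(A, B, C, dt=0.01, steps=500):
    """Forward-Euler output trace for x' = tanh(A x + B u(t)), y = C x."""
    x = np.zeros(A.shape[0])
    ys = []
    for k in range(steps):
        u = np.array([np.sin(k * dt)])
        x = x + dt * np.tanh(A @ x + B @ u)
        ys.append(C @ x)
    return np.array(ys)

A = np.array([[0.2, -1.0],
              [1.0, -0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])

T = np.diag([1.0, -1.0])            # flip the sign of neuron 2
y_original = run(A, B, C)
y_flipped = run(T @ A @ T, T @ B, C @ T)
# The two weight assignments are indistinguishable from input/output data.
print(np.max(np.abs(y_original - y_flipped)))
```

The difference is zero (to machine precision) because $T\tanh(v) = \tanh(Tv)$ for any diagonal $\pm 1$ matrix $T$, so the change of variables $z = Tx$ maps one system onto the other.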

The first question that comes to my mind is: are these ‘neurons’, as modelled, biologically relevant?  That is, do ‘real’ neurons behave that way?  Certainly, real biological networks can be (and have been) modelled this way – though those networks tend to be networks using electrical synapses (gap junctions) rather than chemical synapses, so the ODE model is actually quite an accurate depiction of the electrical circuit.  My concern is with those networks using chemical synapses and the propagation of an action potential down an axon.  This introduces a time component into the model that, I think, needs to be addressed.  Of course, it *might* be addressable by appropriately adjusting the network weights.
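To make the time-component concern concrete, here is an entirely hypothetical two-neuron toy where the postsynaptic cell sees a *delayed* copy of the presynaptic activity, standing in for conduction and synaptic delay.  Comparing a delayed and an undelayed run shows the input/output behavior genuinely changes, which is why I doubt a pure weight adjustment can always absorb it:

```python
import numpy as np

def simulate_delayed(w, delay_steps, dt=0.01, steps=2000):
    """Toy model: x2'(t) = -x2(t) + tanh(w * x1(t - tau)), where the
    presynaptic activity x1 is prescribed and tau = delay_steps * dt.
    Entirely hypothetical -- just to probe the effect of a delay."""
    x1 = np.zeros(steps)
    x2 = np.zeros(steps)
    for k in range(steps - 1):
        x1[k + 1] = np.sin(dt * (k + 1))               # presynaptic trace
        past = x1[k - delay_steps] if k >= delay_steps else 0.0
        x2[k + 1] = x2[k] + dt * (-x2[k] + np.tanh(w * past))
    return x2

no_delay = simulate_delayed(w=1.0, delay_steps=0)
with_delay = simulate_delayed(w=1.0, delay_steps=100)  # tau = 1.0 time unit
```

With a delay, the system is a delay-differential equation rather than an ODE, so it sits outside the class of networks the paper’s result covers as stated.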

The second question has to do with experimentation: What is the least amount of information the biologist needs to acquire to determine the network weights?  How much error is introduced by the experimental method?  Perhaps an example is in order:  Imagine that you are studying the hearing pathway in a mouse.  Technically, the network you are looking at is the entire brain (since we are keeping the mouse alive), though perhaps we can limit the ‘network’ to that of the hearing pathways.  But our ‘input’ is not a nice function we plug into a neuron with a signal generator, it’s a recording of a mouse squeak.  So we aren’t sure exactly what the inputs are, and our output is what we record from a single electrode placed into the mouse’s brain.
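As a baseline for that question: if (hypothetically) we could record the full state of *every* neuron along with its derivative, recovering the weights would reduce to linear least squares, since $\operatorname{atanh}(\dot{x}) = Ax + Bu$ is linear in the unknown entries.  The single-electrode setting is precisely what breaks this.  A sketch under the full-observation assumption, with made-up “true” weights:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = 0.5 * rng.normal(size=(3, 3))       # hypothetical "true" weights
B_true = rng.normal(size=(3, 1))

# Collect (state, input, derivative) samples by simulating the network.
# NOTE: this assumes we can observe every neuron's state -- precisely
# what a single electrode does NOT give us.
X, U, D = [], [], []
x = np.zeros(3)
dt = 0.01
for k in range(500):
    u = np.array([np.sin(0.1 * k)])
    d = np.tanh(A_true @ x + B_true @ u)     # x' = tanh(A x + B u)
    X.append(x)
    U.append(u)
    D.append(d)
    x = x + dt * d

# atanh(x') = A x + B u is linear in the unknown weights,
# so ordinary least squares recovers them from the samples.
Phi = np.hstack([np.array(X), np.array(U)])  # 500 x 4 regressor matrix
Y = np.arctanh(np.array(D))                  # 500 x 3 targets
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
A_est, B_est = W[:3].T, W[3:].T
```

With noiseless, fully observed data the recovery is essentially exact; the interesting experimental question is how fast it degrades as we observe fewer neurons, with noisier derivatives and an uncertain input.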

Certainly we can get some information this way, but what network information can we develop?  Even given a far simpler network with a well-defined input, what information about the network topology can we derive from the input/output behavior?

Also, can a ‘large’ network emulate a ‘smaller’ one?  What do ‘large’ and ‘small’ even mean in this context?  Number of neurons?  Number of connections (synapses)? Since we are looking at directed networks (paths tend to be one-way), how many recurrent (feedback) connections are there?  This is going to be my focus for a while, so expect more on this subject as I work my way through the paper.
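As a first stab at quantifying recurrence, here is a small helper (plain Python, names my own) that counts the synapses lying on some directed cycle: an edge $(u, v)$ is ‘recurrent’ if $v$ can reach $u$ again through the network.

```python
from collections import defaultdict, deque

def recurrent_edges(edges):
    """Return the edges of a directed graph that lie on a directed cycle:
    (u, v) is recurrent iff v can reach u."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    def reaches(src, dst):
        # Breadth-first search from src, looking for dst.
        seen, queue = {src}, deque([src])
        while queue:
            n = queue.popleft()
            if n == dst:
                return True
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        return False

    return [(u, v) for u, v in edges if reaches(v, u)]

# A three-neuron loop plus one purely feed-forward synapse.
edges = [(1, 2), (2, 3), (3, 1), (1, 4)]
print(recurrent_edges(edges))   # → [(1, 2), (2, 3), (3, 1)]
```

This is crude (it ignores weights and timing entirely), but it gives one concrete number to attach to ‘how recurrent’ a network is when comparing a ‘large’ network to a ‘smaller’ one.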

Of course, there is more to talk about than just neural networks!  Also, I’ll spare many of the details of my analysis here (though probably I’ll put them up *somewhere*) and focus on the results.

That’s it for now!

ex animo-

Felicis