Wikipedia is a wonderful place. First the definition of Viking Metal. And then a list of bands.

Here’s one of Bathory’s videos (only video?) entitled One Road to Asa Bay. Go Vikings!

**A linear transformation T is injective if and only if $\ker T = \{0\}$.**

Proof:

Suppose first that T is injective. Then $T(x) = 0 = T(0)$ implies $x = 0$, so $\ker T = \{0\}$.

Now suppose $\ker T = \{0\}$. Then $T(x) = T(y)$ implies $T(x - y) = 0$, which implies $x - y = 0$, and $x = y$. So, T is injective.
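As a quick numerical illustration (my own example, not part of the proof): a matrix with a nontrivial kernel sends two distinct vectors to the same image, and their difference lands in the kernel, exactly as in the argument above.

```python
# Illustration: a 2x2 matrix with a nontrivial kernel is not injective.

def apply(T, v):
    """Apply a 2x2 matrix T (given as a list of rows) to a vector v."""
    return (T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1])

# T has kernel spanned by (1, -1), since the columns are equal.
T = [[1, 1],
     [2, 2]]

x, y = (3, 5), (4, 4)            # two distinct vectors...
print(apply(T, x), apply(T, y))  # ...with the same image: (8, 16) twice

# Their difference lies in the kernel, as in the proof: T(x - y) = 0.
diff = (x[0] - y[0], x[1] - y[1])
print(apply(T, diff))            # (0, 0)
```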


**If $f: G \to G'$ is a group homomorphism, then $f(e) = e'$, where e and e' are the identities of G and G'.**

Proof:

Since f is a homomorphism, and since $f(e) \in G'$, an Abelian group, then:

$f(e) = f(e)e' = f(e)\left(f(e)f(e)^{-1}\right)$

$= \left(f(e)f(e)\right)f(e)^{-1}$ (by associativity)

$= f(ee)f(e)^{-1}$ (f is a homomorphism)

$= f(e)f(e)^{-1} = e'$

The last line is true because $f(e)^{-1}$ is an inverse of $f(e)$ in $G'$, and inverses in a group are unique (but that’s another proof).
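For a concrete illustration (my example, not from the post): the reduction map from $(\mathbb{Z}, +)$ to $(\mathbb{Z}_5, +)$ is a homomorphism, and sure enough it sends the identity 0 of $\mathbb{Z}$ to the identity 0 of $\mathbb{Z}_5$.

```python
# The canonical homomorphism Z -> Z_5 maps identity to identity.

def f(k):
    return k % 5  # reduction mod 5

# Homomorphism property: f(a + b) = f(a) + f(b) in Z_5.
for a in range(-10, 10):
    for b in range(-10, 10):
        assert f(a + b) == (f(a) + f(b)) % 5

print(f(0))  # 0 -- the identity of Z maps to the identity of Z_5
```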

Michael Atiyah gives a presidential address on Mind, Matter, and Mathematics (good alliteration). In it he discusses the difference between mathematical philosophy and natural philosophy. It’s an interesting read throughout.

But, near the end he says:

> Mathematical physicists believe that there are indeed simple and beautiful mathematical equations that govern the universe, and that the task of the scientist is to search for them. This is an article of faith.
>
> An alternative faith is to believe in a God who created the universe and kindly provided us with laws or equations that we would be able to understand.

He touts these as compatible philosophies. As faiths, they are similar. (I take issue with the first idea: mathematical equations do not “govern” the universe; they are just really good at representing it.)

But, more importantly, I disagree with the idea that belief in God is always compatible with science (an implication I think he was making). In physics, it’s an easier sell. There is nothing alive in physics.

A harder sell is in biology. Belief in God is one thing, but belief in a soul is problematic. If one believes in a soul, that every human (Homo sapiens) is singled out from among God’s creatures as different (better), then all of biological evolution (and what it can tell us about who we are) falls apart.

If it is true that humans ARE totally and fundamentally different than all of the other creatures on earth (and potentially on other planets), and if this is due to our having a soul, then we MUST abandon much of what we believe to be true in biology.

If biology is right, then we are not different in any fundamental way from other species. Unique, sure. (So is the narwhal.) But not totally different.

I am hesitant to say that a belief in a God-given soul is compatible with biological science, and from there science generally.

(HAT TIP: Noncommutative Geo)

I’m gonna try and post a few times a week, including POD’s and other tidbits.

I had hoped to be posting something weekly about *all* of my classes, but after getting started on this post last weekend and not taking it any further, I’ve decided to just make one math-related post a week. Later today or tomorrow I am going to put up an explanation of l’Hospital’s rule, which gives us a way to calculate limits that would otherwise give rise to what are called ‘indeterminate forms’. Sometime over this week I’ll throw in a note about where Bessel functions come into play, and next weekend I’ll catch up with some measure theory from the Stochastic Processes class.
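As a numerical teaser for that post (my own example): sin(x)/x gives the indeterminate form 0/0 at x = 0, yet the limit is 1, which is exactly the value of l’Hospital’s ratio of derivatives, cos(x)/1, at 0.

```python
# The indeterminate form 0/0: sin(x)/x as x -> 0.
import math

for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x)  # the ratio approaches 1 as x shrinks

# l'Hospital's rule: the limit equals the ratio of derivatives at 0.
print(math.cos(0.0) / 1.0)  # 1.0
```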

ciao for now!

Felicis

*While I’ll also try and post on these topics generally in other articles, the POD’s will consist of simply interesting, important, or fun proofs for their own sake. Proofs are pretty.*

**A ring has the cancellation property if and only if it has no zero divisors.**

Proof:

Part I.

Suppose R is a ring that has the cancellation property. Then for $a, b, c \in R$ with $a \neq 0$, $ab = ac$ implies $b = c$. Now, let $a \in R$ such that $a \neq 0$. Further, suppose a is a zero divisor. Then, there exists a $b \in R$ such that $ab = 0$, but $b \neq 0$. Then, $ab = 0 = a \cdot 0$. But, since R has the cancellation property, this implies that $b = 0$. This is a contradiction. Therefore, if R has the cancellation property, R has no zero divisors.

Part II.

Suppose R has no zero divisors. Let $a, b, c \in R$, and let $ab = ac$, where $a \neq 0$. Then, $ab - ac = 0$. This implies that $a(b - c) = 0$. Since $a \neq 0$, then $b - c$ must equal 0, which implies $b = c$. So, R has the left cancellation property.

Similarly, if $ba = ca$ and $a \neq 0$, then $(b - c)a = 0$, which implies $b - c = 0$, so $b = c$. So, R also has the right cancellation property. Therefore, if R has no zero divisors, then it has the cancellation property.
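As an illustration (not part of the proof), here’s a brute-force check of the theorem in two small finite rings: $\mathbb{Z}_6$ has zero divisors and cancellation fails there, while $\mathbb{Z}_7$ has none and cancellation holds.

```python
# Zero divisors vs. cancellation in Z_n, checked exhaustively.

def zero_divisors(n):
    """Nonzero a in Z_n with ab = 0 (mod n) for some nonzero b."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

def cancellation_holds(n):
    """For all nonzero a in Z_n: ab = ac (mod n) implies b = c."""
    for a in range(1, n):
        for b in range(n):
            for c in range(n):
                if (a * b) % n == (a * c) % n and b != c:
                    return False
    return True

print(zero_divisors(6), cancellation_holds(6))  # [2, 3, 4] False
print(zero_divisors(7), cancellation_holds(7))  # [] True
```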

This quarter I am teaching Calculus II; finishing up some applications of derivatives and then moving on into ‘anti-derivatives’, or integral calculus. I am very excited because I am moving into a new area of teaching, both in subject matter and in *how* I’ll be covering the material. I’ll try to put up some posts about that later.

I am also taking three classes.

Stochastic Processes and Probability Theory II: a continuation of the fall course. We’ll be covering integration in this course also, though from a far more abstract level than what I’ll be covering in my calculus class! We will also use this theory of integration to examine the moments of a random variable, as well as some applications. Expect more to come! One of the coolest things about graduate school (to me) is how we are getting to the point where everything is starting to fit together!

Partial Differential Equations II: The last course on this subject until next year. After going through the basics of PDEs last quarter – which is to say looking at systems of two or maybe three variables and learning methods for solving PDEs with boundary conditions (of first and second order – and mostly linear in one or both variables) – we now get to look at higher dimensional systems, problems with infinite domains (so one or more ‘boundary’ is now at infinity), and a couple of numerical methods for ‘solving’ PDEs. I am really looking forward to this class!

Finally, Graph Theory I: The fall course was not graph theory, but combinatorics, so I took genetics instead (having had combinatorics already). Given the connection between graph theory and dynamic systems, I am looking forward to getting some good tools to work with! The application to neural networks is obvious, so expect to see more from this subject too!

That’s about it for now! I hope that everyone had a good break. I’ll add an update next week!

ex animo-

The biggest lesson I learned was not to try teaching two classes while taking a full load of graduate credits. And buying a house. And moving… Yes, I took on a bit too much, and my posts necessarily suffered.

Winter quarter will be different! Aside from having more time, I’ll be using my time a little more efficiently! But – we’re here for math, so let’s discuss some of the cool things I learned in mathematics this quarter:

*Stochastic Processes* (that is, random processes) are cool. Oh yes – moreover, the analysis that goes along with stochastic processes has really improved my understanding of real analysis! The key to them both lies in *measure theory* (well – *one* of the keys…)

What’s measure theory, I hear you ask? Well – the basic idea is to look at a particular kind of function, called a measure, that maps subsets of a space to the real numbers (or extended reals, wherein we get to consider infinity and negative infinity as numbers). In a stochastic process, the measure maps into the interval [0,1] and gives the whole space measure 1; it is called a *probability measure*. Intuitively, length is a measure – it takes a subset of the real number line (an interval) and returns a real number (its length). That’s fine for something nice, like an interval, or even a collection of intervals, but we want the collection of subsets to be a *sigma-field*, so that we can combine the subsets in the ways we normally combine sets: union, intersection, complement… And we want the measure to extend to cover these newly formed sets – even when we take the union of a countably infinite number of sets.
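Here’s a small sketch of those ideas (my own example, not from the course): a probability measure on subsets of the naturals given by $P(\{n\}) = (1/2)^{n+1}$. Countable additivity says the measure of a disjoint union is the sum of the measures of the pieces, which we can check numerically for the evens and the odds.

```python
# A probability measure on subsets of {0, 1, 2, ...}: P({n}) = (1/2)^(n+1).

def P(event, terms=200):
    """Measure of a set of naturals (series truncated at `terms`)."""
    return sum(0.5 ** (n + 1) for n in range(terms) if n in event)

evens = {n for n in range(200) if n % 2 == 0}
odds = {n for n in range(200) if n % 2 == 1}

# The evens and odds are disjoint and their union is the whole space,
# so their measures must sum to 1 (countable additivity).
print(P(evens) + P(odds))  # ~1.0
print(P(evens))            # ~2/3, by summing the geometric series
```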

It turns out that a (sigma-finite) measure on a field can be extended uniquely to a measure on the generated sigma-field. That is useful and cool, and I’m going to stop here to avoid having to recapitulate the entire quarter’s work!

My second course was Partial Differential Equations, which was much more about the nuts and bolts of how to solve special instances of such equations than about a lot of theory. Interestingly enough, unlike ODEs, there is no general theory for PDEs! Thus what you mostly get is recipe solutions for special forms that cannot then be extended to cover other types of PDEs. Sad, but true.

That is not to say that there was *no* theory! There are still things that can be discussed at a higher level, and I am really looking forward to the advanced differential equations course next year! The one big thing we looked at (in a theoretical sense) was the Fourier coefficients. Interestingly, we can look at a function space as a vector space having a (countable) basis made up of orthogonal sine functions: sin(x), sin(2x), etc. Let’s take some arbitrary function f(x) and attempt to approximate it with a (finite) series of sine functions of this form. We adjust the coefficients to minimize the ‘distance’ (the L2 error), and it turns out that the coefficients minimizing the error in the approximation are exactly the Fourier coefficients of the sine-series expansion of f(x). That’s pretty neat!
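Here is a numerical version of that claim (my own sketch, using a brute-force grid search rather than any formula): for f(x) = x on [0, π], the coefficient that minimizes the L2 error of a one-term sine approximation is exactly the first Fourier sine coefficient, $b_1 = \frac{2}{\pi}\int_0^\pi x \sin x \, dx = 2$.

```python
# The least-squares-optimal sine coefficient IS the Fourier coefficient.
import math

N = 10000
xs = [math.pi * i / N for i in range(N + 1)]

def l2_error(c):
    """Integral of (x - c*sin(x))^2 over [0, pi], trapezoid rule."""
    vals = [(x - c * math.sin(x)) ** 2 for x in xs]
    return (math.pi / N) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Scan candidate coefficients c = 0.00, 0.01, ..., 4.00 for the minimum.
best_c = min((c / 100 for c in range(0, 401)), key=l2_error)
print(best_c)  # 2.0 -- the Fourier coefficient b_1 of f(x) = x
```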

It does turn out that with boundary conditions, if the related ODEs (created by the method of separation; assuming u(x,t) can be written as f(x)g(t)…), have a solution, then we always have a finite or countable number of orthogonal functions in the series expansion of the solution. As to why – well, that’s going to have to wait until next year!

That’s it for now – more to come in the new year!
