Welcome to the mathematics reference desk.
August 14
Parallel curves in geometric algebra?
Is there a way to represent the problem of parallel curves in geometric algebra? JMP EAX (talk) 22:59, 14 August 2014 (UTC)
- I haven't gone through the exercise, but intuitively it seems straightforward: start with a curve in a two-dimensional space described as a position vector function of some scalar parameter. Finding the tangent vectors of the curve requires differentiation of the vector function. Aside from that, normalization, rotation and scaling are straightforward operations in geometric algebra, and these can be used to find a parallel curve. Maybe not quite what you asked. It would be interesting to determine whether a curve is always parallel to its parallel – not answered by the article as far as I can see. —Quondum 01:59, 16 August 2014 (UTC)
- Yeah, the differentiation (tangent) is probably the non-straightforward part. There's a (relatively recent) book that probably covers that, but I haven't read it: John Snygg (2011). A New Approach to Differential Geometry using Clifford's Geometric Algebra. Springer. ISBN 978-0-8176-8282-8. JMP EAX (talk) 20:22, 17 August 2014 (UTC)
- The differentiation is straightforward: it is the derivative with respect to the single parameter. No partial derivatives are needed; it is no different from normal vector calculus. Snygg seems to cover the concept in §7.1 as, in a sense, a velocity along the curve. If the position vector is expressed in two components, the components of the tangent vector can be found as the derivative of each component with respect to the parameter. —Quondum 05:52, 18 August 2014 (UTC)
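For concreteness, here is a minimal numerical sketch of the recipe described above: differentiate the position vector with respect to the parameter, normalize, rotate a quarter turn to get the unit normal, and offset by the desired distance. The ellipse, the step size and the function names are illustrative choices only, not something from the thread:

```python
import numpy as np

def parallel_curve(r, t, d, h=1e-6):
    """Offset the plane curve r(t) by signed distance d along its unit normal."""
    p = r(t)
    tangent = (r(t + h) - r(t - h)) / (2 * h)            # derivative w.r.t. the scalar parameter
    tangent = tangent / np.linalg.norm(tangent, axis=0)  # normalize
    # Rotating the unit tangent a quarter turn gives the unit normal; in
    # geometric algebra terms this is multiplication by the unit pseudoscalar.
    normal = np.array([-tangent[1], tangent[0]])
    return p + d * normal

ellipse = lambda t: np.array([2 * np.cos(t), np.sin(t)])  # arbitrary test curve
t = np.linspace(0, 2 * np.pi, 400)
offset = parallel_curve(ellipse, t, 0.3)                  # points of the parallel curve
```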
August 16
Knuth's up-arrow notation
How much money is being paid by the check in this xkcd comic? I found Knuth's up-arrow notation but didn't understand it. 00:22, 16 August 2014 (UTC) — Preceding unsigned comment added by 71.79.68.10 (talk)
- It is an extremely large number. It is way too big to write the digits. In fact, it is probably way too big to even write the number of digits. Offhand, I'd guess that the number of digits in the number is way bigger than the number of atoms in the universe. Bubba73 You talkin' to me? 00:46, 16 August 2014 (UTC)
- Even the number of digits in the number of digits in the number is larger than the number of atoms in the visible universe, as is the number of digits in the number of digits in the number of digits in the number, and so on. In fact, the number of times I would have to prepend "number of digits in" before the number could be written down is larger than the number of atoms in the visible universe. As is the number of digits in it. -- BenRG (talk) 01:33, 16 August 2014 (UTC)
- It's even bigger than that: The number of digits in the number of times you would have to prepend "number of digits in" before the number is larger than the number of atoms in the visible universe. Sławomir Biały (talk) 12:01, 16 August 2014 (UTC)
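For a feel for the notation: the recursion behind Knuth's up-arrows is only a few lines of code, though anything beyond the smallest inputs will never finish. A sketch (the function name is my own):

```python
def up(a, n, b):
    """Knuth's up-arrow a followed by n arrows, then b."""
    if n == 1:
        return a ** b          # one arrow is plain exponentiation
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))   # peel off one arrow at a time

print(up(3, 1, 3))   # 3^3 = 27
print(up(3, 2, 3))   # 3 double-arrow 3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) is 3 double-arrow 7625597484987: already unimaginably more
# digits than could ever be stored, let alone printed.
```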
- Can anyone figure out what the third term (i.e. S¤¤(1000)) on the right is meant to be? —Quondum 13:59, 16 August 2014 (UTC)
- I was only focused on the term with the arrows. I thought maybe the S thingie was something to do with states in statistical mechanics, although it would be quite amusing if this was actually some stupendously small number, rendering the amount on the check some trivial number of cents. This seems unlikely though. Sławomir Biały (talk) 14:10, 16 August 2014 (UTC)
- When the general context is xkcd humor, anything could be the case. But in the context of the specific xkcd what if page, it seems that a large number is intended. —Quondum 15:15, 16 August 2014 (UTC)
- It may be SBB, referring to the busy beaver function. -- BenRG (talk) 18:39, 16 August 2014 (UTC)
- Sounds right. In this case, not only is it a huge number, it's also probably non-computable. -- Meni Rosenfeld (talk) 22:19, 16 August 2014 (UTC)
- Not sure exactly what you mean by that. Every natural number, of course, is a computable number (well, that is, its image under the natural map into the reals is; hi Bo).
- Maybe you have in mind some sort of feasible computation, something that can actually be done within the bounds of physical realizability? In that case, we have other problems, because (say) the decimal representation of the number isn't one that can be feasibly written down, never mind computed, which makes it sort of trivial to say it can't be feasibly computed. If you allow more compact representations, then what's wrong with one that just represents it in terms of the Busy Beaver function? That representation is certainly computable.
- So not saying you're wrong, just that it's not terribly clear what the claim means. --Trovatore (talk) 22:44, 16 August 2014 (UTC)
- Hmm... I'm not sure myself what I mean, but it's certainly not that it's not feasible in the physical universe (that could be said about the Knuth arrow number here, but I wouldn't call it non-computable. Given a classical computer and a huge, but finite, amount of RAM and CPU time, I can tell you the value of any digit you please of this number).
- Now that you say it, it does sound obvious that every natural number is computable, but since the BB function is not computable, it makes sense to me that some of its values (such as S(1000)) are non-computable in some way. Maybe not that the number itself is not computable, but that it's impossible to compute the equivalence of its BB representation and its more usual representations, such as binary expansion.
- Maybe something along the lines of "there exists an integer n such that for every m, there is no proof in ZFC that the nth binary digit of S(1000) is m"? (where, when we make this statement formal, we don't substitute the actual number for S(1000), we write down the definition.) Does that make sense?
- Or maybe I'm just wrong and there is no sense in which particular values of the BB function are not computable. -- Meni Rosenfeld (talk) 07:53, 17 August 2014 (UTC)
- It makes sense, although I'm not sure it's true at 1000. It's certainly true for some N, purely on the grounds of noncomputability; otherwise, we could compute S by enumerating all ZFC proofs until we find the one that gives us the value of S on our desired input. In fact, we can construct an N where it holds: let M be the machine that searches for a ZFC proof of 0=1, then outputs the Gödel number of that proof. Let N be the number of states in M. Then a ZFC proof that S(N) = k would allow you to prove consistency of ZFC simply by checking that none of the first k proofs prove 0=1.--80.109.80.78 (talk) 08:43, 17 August 2014 (UTC)
I thought something like that might have been what you (Meni) had in mind.
Yes, absolutely, it could be the case that you can't prove in ZFC what the value of BB(1000) is. I don't know whether that's actually the case or not, but it may well be. Definitely there is some N such that you can't prove in ZFC what the value of BB(N) is.
That does not, however, make the value non-computable. It just means that ZFC is insufficiently strong to decide what it is. There is a real answer, Platonistically; a particular first-order theory just might not get you there.
There is no such thing as a non-computable natural number, or a non-computable value of a function from the naturals to the naturals. Non-computability applies only when you have infinitely many values to produce; any finite set of natural numbers is computable.
Here's the confusion, maybe. Computability is not about justifying an answer. It's only about whether there can exist a program that correctly finds the answer, whether the program is justified or not. The program might produce the answer completely by accident, as it were, and it would nevertheless witness computability. --Trovatore (talk) 09:26, 17 August 2014 (UTC)
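Put concretely (a toy illustration only; the 42 is a stand-in for the true value, which nobody can exhibit):

```python
def bb_1000():
    # Some program of exactly this shape returns the true value of BB(1000),
    # and that program witnesses computability, even though we can never
    # determine or verify which program it is.
    return 42
```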
- Going back to the psychology/intent of the xkcd strip: I find it interesting that it used a product of three big numbers, each of which seems to require a deeper level of understanding of its sheer size, perhaps so that the smaller numbers fill in where understanding of the larger number(s) is just missing. There is also an interesting irony in the strip: that the cheque (check) would be inherently valueless, something that is not alluded to in the strip but could not have escaped Randall. I am reminded of a challenge run by Scientific American decades ago: readers were to mail in a whole number, and the reader submitting the largest number would receive $1,000,000 divided by the sum of all received numbers (or something to that effect). They even pointed out the optimal strategy. Apparently some of the numbers submitted were of the same ilk as on the xkcd cheque, so no money needed to be paid. —Quondum 15:24, 17 August 2014 (UTC)
- That was the Luring lottery. Some people did submit the largest number they could manage to define on a postcard. Hofstadter wrote in his next column about how sad that made him because it showed how blindly selfish some of his readers were, in contrast to his superrational self, of course. I think the selfish ones were the people who tried to extract a substantial payout from SA (that it could ill afford) while doing nothing of value themselves. The people who submitted large numbers saved SA from a potentially difficult situation, and some of them were apparently pretty creative about it, unlike the people who followed Hofstadter's instructions. Anyway, getting back to the check, yes, it didn't make sense. -- BenRG (talk) 22:37, 17 August 2014 (UTC)
- Interesting point. I have never agreed with Hofstadter about "superrationality", but this is an angle that I can't recall ever having occurred to me.
- (Would a mil actually have hurt SA that badly, even if they'd had to pay out the whole thing? I guess a million dollars was a lot of money back then.) --Trovatore (talk) 23:33, 17 August 2014 (UTC)
August 18
Determining relative advantages of traits
So I'm comparing the fitness of organisms. I can take an organism with a collection of traits (say, a, b, and c) and compare it to an organism with another collection of traits (x and y) and see which has greater fitness. I can never compare individual traits. If I've got a sample of a bunch of matchups (not necessarily every possible combination of a given number of traits), how do I go about finding which individual traits, which combinations, etc. best promote organism fitness? --2404:2000:2000:5:0:0:0:C2 (talk) 04:23, 18 August 2014 (UTC)
- Okay, here's a tentative idea I had: represent each matchup as an inequality (e.g. a+b+c-x-y>0; note there are only finitely many traits), giving a system of inequalities. The more samples, the narrower my solution space. This works under the assumption that each trait contributes a fixed value to fitness, with no added value from particular combinations. But I think it might work reasonably well anyway; see the sketch below. --2404:2000:2000:5:0:0:0:C2 (talk) 04:56, 18 August 2014 (UTC)
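That idea can be tested directly as a linear feasibility problem. A sketch under the same additive assumption; the trait names, the matchup results and the margin of 1 (harmless, since any solution can be rescaled) are all invented:

```python
import numpy as np
from scipy.optimize import linprog

traits = ["a", "b", "c", "x", "y"]
idx = {t: i for i, t in enumerate(traits)}
matchups = [({"a", "b", "c"}, {"x", "y"}),   # (winning trait set, losing trait set)
            ({"x", "y"}, {"a", "c"})]

A = np.zeros((len(matchups), len(traits)))
for row, (winners, losers) in enumerate(matchups):
    for t in winners:
        A[row, idx[t]] = -1.0   # linprog wants A @ v <= b, so flip signs
    for t in losers:
        A[row, idx[t]] = 1.0
b = -np.ones(len(matchups))     # demand a winning margin of at least 1

# Any feasible point will do, so minimize a dummy (zero) objective.
res = linprog(c=np.zeros(len(traits)), A_ub=A, b_ub=b,
              bounds=[(None, None)] * len(traits))
print(dict(zip(traits, res.x)) if res.success else "matchups are inconsistent")
```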
- You need to figure out what fitness function you want to optimize, and how the traits contribute to it. For example, what's the better deal? A 2L bottle of soda for $5, or a 500mL bottle for $2? The 2L has a better per-unit cost ($2.50/L versus $4/L), but if you're only going to drink 400mL, with the rest going flat and being dumped out, the 500mL is a better deal, as the total cost is lower ($2 versus $5). Add in the fact that the 500mL bottle is refrigerated and the 2L isn't (how much is cold soda worth to you?), or perhaps that the 2L comes in your favorite flavor, but the 500mL doesn't, and you have a complex optimization problem that can't just be reduced to a simple "what's the price per liter?". There isn't any single answer as long as you're unclear on how the traits relate to each other, fitness-wise.
- What you need to do if you really want to compare fitness is convert each independent trait to a single numerical value that's consistent within and between each other trait. That is, give each trait a certain number of "points", and for each organism sum the number of "fitness points" it has. So you may have 10 points for being cold, but 12 points for coming in your favorite flavor. You can get more complex if you want: trait a and trait b each get so many points individually, but when they occur together, they get a certain bonus (or penalty). The fitness function need not be a linear combination of the individual traits.
- If you can't convert the values to a single consistent "point" scale, you may want to look into Pareto optimization (a small sketch follows this reply). That is, you can't necessarily rank all the organisms in single file, but you can say "this set of organisms, when taken as a group, have a better fitness than any of these other organisms". Even then, though, you'll want to reduce your dimensionality as much as possible by combining those traits that can be combined into a reduced number of metrics, and if you want to rank organisms on the Pareto front, you'll need to determine a fitness function. -- 160.129.138.186 (talk) 18:39, 18 August 2014 (UTC)
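A minimal sketch of that Pareto idea, with invented organisms and trait scores; "dominates" is the standard test (at least as good on every score, strictly better on at least one):

```python
organisms = {"org1": (3, 5, 1), "org2": (4, 4, 4),
             "org3": (2, 2, 2), "org4": (5, 1, 3)}

def dominates(p, q):
    """True if p is at least as good as q everywhere and better somewhere."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

front = [name for name, p in organisms.items()
         if not any(dominates(q, p) for other, q in organisms.items()
                    if other != name)]
print(front)   # org3 is dominated by org2; the rest form the Pareto front
```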
- It depends on what you want to do, and there isn't one general answer. I'll assume you are interested in concepts from real biology. One perspective is that traits are only adaptive if they lead to a persistent population, i.e. the species/strain doesn't go extinct. Then "fitness" only makes sense in terms of the processes that affect population biology, things like reproduction, predation, resource competition, and so on. This is basically the stuff in Fitness_(biology).
- In theoretical ecology, we often use the long-term low-density growth rate (LTLDGR), as described here [1]. The basic idea is that populations that can reliably grow back from low density while competing with others will not go extinct. If all types can do this, we can infer species coexistence. The LTLDGR can be measured empirically if good data is available, or calculated from a model. Another perspective on fitness is to take the expected value of the number of "grandchildren". Again, this can be computed for a population if the data is good, but will otherwise need a model. Really, there are just a lot of ways that "fitness" can be quantified, and what is best/most useful depends on the details of your problem. As for your suggestion of just adding together numbers, that is not something that makes much sense: not only does that disallow interactions between traits, but it also assumes that all traits are somehow on comparable scales... Anyway, if you'd like to know more about any of the stuff I wrote up, just ping me, I'll be happy to give more detail/refs. SemanticMantis (talk) 22:50, 18 August 2014 (UTC)
- As regards "what I want to do": after taking a sample, I'd like to be able to predict which of two collections of traits has greater fitness. --2404:2000:2000:5:0:0:0:C2 (talk) 23:55, 18 August 2014 (UTC)
- The "good" and useful answers for you will be very different, depending on your application. E.g. a highschool project on mice is very different from a research paper on theoretical ecology, which is very different from a video game that uses evolutionary ideas. Unfortunately your response could be true for any of those scenarios, but I'll still try to explain.
- If you want to "take a sample" and see which type has "greater fitness", you have to have either (a lot of) data from the real world or a model of some sort. There are a few definitions for measurements at Fitness_(biology)#Measures_of_fitness, and I mentioned the LTLDGR above. There are a few others, but they all revolve around the dynamics of the population over time. So, you have to quantify how much each genotype or phenotype contributes to the next generation. The entire notion of "fitness" in biology is about survival and reproduction. No trait is "better" or "worse", except in that context. There is no biologically meaningful way of measuring the fitness of traits in a vacuum. You need to account for, at minimum, survival and reproduction over time. This is somewhat a problem of terminology, as this has very little to do with some other uses of the word "fit". I understand this is the math desk, but this is an area of evolution/ecology that actually has a lot of mathematical tools and techniques. If this is just for some small project or video game or something, you can make up whatever you like, or follow some of the other suggestions. If you want your answer to make sense in terms of the modern understanding of fitness in biology, you have to use one of the established mathematical frameworks. I apologize if I sound harsh, but I want to make it clear that this is a well-studied area of research, and making something up ad hoc might not be very useful. SemanticMantis (talk) 15:47, 19 August 2014 (UTC)
- If you assume all the traits are independent, then you can do the following:
- 1) Compare all the cases with trait "a" to all those which lack trait "a", even though each of the two groups will contain a mixture of every other trait.
- 2) Repeat for trait "b", "c", "x", "y", and any other traits.
- 3) For each trait, you will then have either a positive or negative differential, or no significant difference in fitness, based on whether that trait is present or absent.
- 4) You could then conclude that the combination of all the positive traits, and absence of all the negative traits, is the most fit organism.
- However, again note that this assumed that all the traits were independent. You may very well find this not to be the case. For example, cats with white fur and blue eyes might be desirable, but such cats also tend to be deaf. So, if this turns out to be the case, things get more complex. StuRat (talk) 00:26, 19 August 2014 (UTC)
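A sketch of steps 1 through 3 under that independence assumption; the sample data below is fabricated, and in practice the fitness score attached to each trait set might be, say, its win rate over the observed matchups:

```python
# Each sample: (set of traits, fitness score for organisms with those traits).
samples = [({"a", "b", "c"}, 0.9), ({"x", "y"}, 0.4),
           ({"a", "x"}, 0.7), ({"b", "y"}, 0.5)]

all_traits = set().union(*(traits for traits, _ in samples))
for trait in sorted(all_traits):
    with_t = [f for traits, f in samples if trait in traits]
    without_t = [f for traits, f in samples if trait not in traits]
    diff = sum(with_t) / len(with_t) - sum(without_t) / len(without_t)
    print(f"trait {trait}: fitness differential {diff:+.2f}")
```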
August 19
Thermal resistance from a wire through a disk
Please see attached diagrams in http://i62.tinypic.com/2rdg6yg.jpg
The flow of heat through a solid is governed by the solid's thermal resistivity, ρ, a property of the material. For a simple bar shape, the flow of heat from one side to the opposite side as shown in Fig 1 is determined by that particular shape's thermal resistance, R, obtained from ρ and the length, width, and height by the simple formula shown (R = ρL/(WH) for a bar of length L, width W, and height H).
For a perfectly heat-conducting wire or rod passing through a thermally resistive disk, and in perfect contact with the disk, the thermal resistance from wire to disk perimeter is also given by a standard formula: R = ρ ln(Rd/Rw)/(2π T), where Rd and Rw are the disk and wire radii and T is the disk thickness - see Fig 2 in the attached file.
I need to find the thermal resistance from a wire to the disk perimeter edge where a pear-shaped hole is cut into the disk, so that the wire makes contact (assumed to be perfect contact) only over a limited angle, as shown in Fig 3. The area of the hole can be assumed large compared to the cross section of the wire but small compared with the disk area. Assume heat only flows through solids, and not across any gap, and that there is negligible radiation.
How can I get or derive a formula, either approximate or exact? 124.178.49.97 (talk) 10:13, 19 August 2014 (UTC)
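For reference, the Fig 2 formula for the uncut disk is straightforward to evaluate; a sketch with invented placeholder values:

```python
import math

rho = 2.0        # thermal resistivity of the disk material (K*m/W), invented
R_disk = 0.05    # disk radius (m)
R_wire = 0.001   # wire radius (m)
T = 0.002        # disk thickness (m)

R = rho * math.log(R_disk / R_wire) / (2 * math.pi * T)  # wire to perimeter
print(f"R = {R:.0f} K/W")
```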
- While the original problem has cylindrical symmetry which makes for a simple analytic solution (see for instance Thermal conduction#Cylindrical shells), with the cutout getting an analytic solution would be more challenging. For more complex geometries like this, it may be best to numerically approximate the solution to the heat equation. Energy2d is a free program that may be sufficient for your needs, but there are many full featured commercial programs, too. --Mark viking (talk) 21:37, 19 August 2014 (UTC)
- Well yes, the solution for a plain disk is in almost any textbook on heat flow - I gave it in the drawing I attached. Yes, a numeric solution can be used. But a formula would be MUCH more convenient, and it need not be an exact solution. Within 20% or so would be quite good enough. 124.178.49.97 (talk) 01:30, 20 August 2014 (UTC)
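Failing a closed-form answer, here is a crude relaxation sketch of the kind of numerical estimate suggested above. The grid resolution, iteration count, material values, and the circular stand-in for the pear-shaped hole are all invented placeholders, so treat the output as an order-of-magnitude check only:

```python
import numpy as np

n = 201                                   # grid points per side
R_disk, R_wire = 0.05, 0.001              # disk and wire radii (m)
rho, thick = 2.0, 0.002                   # resistivity (K*m/W), thickness (m)

span = 1.02 * R_disk
y, x = np.mgrid[-span:span:n * 1j, -span:span:n * 1j]
h = x[0, 1] - x[0, 0]                     # grid spacing
r = np.hypot(x, y)

hole = np.hypot(x - 0.004, y) < 0.003     # stand-in for the pear-shaped hole
conduct = (r <= R_disk) & ~hole           # cells that carry heat
hot = conduct & (r <= R_wire)             # wire contact patch, held at T = 1
cold = conduct & (r >= R_disk - 2 * h)    # rim, held at T = 0
free = conduct & ~hot & ~cold

T = np.where(hot, 1.0, 0.0)
nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for _ in range(10000):                    # Jacobi relaxation of Laplace's eq.;
    s = np.zeros_like(T)                  # increase until the answer settles
    c = np.zeros_like(T)
    for shift in nbrs:
        m = np.roll(conduct, shift, (0, 1))
        s += np.where(m, np.roll(T, shift, (0, 1)), 0.0)
        c += m
    T[free] = (s / np.maximum(c, 1))[free]   # skipped neighbours act as insulators

# Heat flow out of the contact patch; each link between adjacent square
# sheet cells has conductance thick/rho, so R = (1 K) / Q.
Q = 0.0
for shift in nbrs:
    Tn = np.roll(T, shift, (0, 1))
    mn = np.roll(free | cold, shift, (0, 1))
    Q += (thick / rho) * np.sum((T - Tn)[hot & mn])
print("R is roughly", 1.0 / Q, "K/W")
```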
Components Exceed The Average Squared
The following problem has come up in something I'm working on and I can't seem to find a good way to solve it. For reals a1,...,an with 0 <= ai <= 1, when is ak >= Average(ai)^2 for all k? A specific answer would be wonderful, though even if the set could simply be characterized, I would be much obliged. Ideally, I need to find a continuous transform of the unit n-cube into the solution set that moves the points around as little as possible (there are various other constraints as well, but I'm not worrying about those at the moment). If the above is simply solved, I'd be even more interested in the general problem: given ai and linear combinations f1,...,fm of them, when is ai >= f1 * f2 * ... * fm for all i? Thank you for all help - this problem is outside of what I'm good at, so any help is quite greatly appreciated :-) Phoenixia1177 (talk) 19:00, 19 August 2014 (UTC)
- If I understand you correctly, any constant sequence will do: ai = c gives an average of c, and c >= c^2 whenever 0 <= c <= 1. Am I missing something...? --CiaPan (talk) 20:15, 19 August 2014 (UTC)
- Phoenixia1177 wants a classification of all such points, not just an example. Is it obvious that the set is even convex?--80.109.80.78 (talk) 20:45, 19 August 2014 (UTC)
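Pending a real characterization, the region is easy to probe empirically; a quick sketch (dimension, sample count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 200_000
a = rng.random((trials, n))                      # uniform points in the n-cube
inside = a.min(axis=1) >= a.mean(axis=1) ** 2    # the condition holds for all k
print(f"fraction of the {n}-cube satisfying it: {inside.mean():.3f}")
# CiaPan's constant sequences a_i = c all land inside, since c >= c^2 on [0, 1].
```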
Simplification of bincoef series?
Keeping in mind that
and
,
is there a similar simplification for
?
I do know that .
- ~Kaimbridge~ (talk) 21:01, 19 August 2014 (UTC)
- Given that the last series converges to an irrational, it's unlikely there would be a closed form for the partial sums. Note that it's a variation on 1/1-1/2+1/3-...=ln 2. You could perhaps get an asymptotic formula for the partial sums using the Euler–Maclaurin formula. --RDBury (talk) 04:14, 20 August 2014 (UTC)
- The formula is
where the values of harmonic numbers of fractional argument are computed with the help of this formula. — 79.113.222.166 (talk) 09:51, 20 August 2014 (UTC)
- While my version of Mathematica seems to find this summation easy (and verifies your answer, 79.113.222.166), I can't see how to prove it myself. Could anyone enlighten me? 129.234.186.11 (talk) 14:42, 20 August 2014 (UTC)
- You should be able to prove it by induction using the properties
and
found in this section. Egnau (talk) 16:28, 20 August 2014 (UTC)
August 20
Deriving the Taylor series of the secant function
Basically, how does one show that the explicit formula(s) of the Euler numbers (given in the article) define the coefficients of the Taylor series of the secant function (up to a change in sign)?
One guess I have is to take advantage of the integral definition of the inverse secant, arcsec(x) = ∫ from 1 to x of dt/(t√(t² − 1)), expand the square root with a binomial series, antidifferentiate, and then apply the Lagrange inversion theorem, but I do not have experience applying that.
Once this series is found, the Cauchy product of it with the series for sine produces the tangent function's series.--Jasper Deng (talk) 01:40, 21 August 2014 (UTC)
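Not a derivation, but the connection is easy to check numerically with exact rational arithmetic: generate E_0, E_2, E_4, ... from the standard recurrence sum over k of C(2n,2k)·E_{2k} = 0 (with E_0 = 1), and compare the coefficients (-1)^n E_{2n}/(2n)! against those obtained by formally inverting the cosine series. A sketch (the truncation order is arbitrary):

```python
from fractions import Fraction
from math import comb, factorial

N = 8  # number of even-order coefficients to check

# Euler numbers E_0, E_2, ..., E_{2(N-1)} from the recurrence.
E = [Fraction(1)]
for n in range(1, N):
    E.append(-sum(comb(2 * n, 2 * k) * E[k] for k in range(n)))

# Claimed Taylor coefficients of sec x at even orders: (-1)^n E_{2n} / (2n)!
sec_from_E = [Fraction((-1) ** n) * E[n] / factorial(2 * n) for n in range(N)]

# Independent computation: invert cos x = sum (-1)^n x^(2n)/(2n)! as a formal
# power series, using s_0 = 1 and s_n = -sum_{j=1..n} c_j s_{n-j}.
c = [Fraction((-1) ** n, factorial(2 * n)) for n in range(N)]
s = [Fraction(1)]
for n in range(1, N):
    s.append(-sum(c[j] * s[n - j] for j in range(1, n + 1)))

print(sec_from_E == s)          # True
print([int(e) for e in E[:4]])  # [1, -1, 5, -61]
```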