🧵 /mg/ - mathematics general
Anonymous at Tue, 15 Oct 2024 05:14:42 UTC No. 16432396
q-analog edition.
Talk mathematics
Previous thread is >>16397584
Anonymous at Tue, 15 Oct 2024 05:25:35 UTC No. 16432405
first for symplectic geometry
Anonymous at Tue, 15 Oct 2024 05:57:32 UTC No. 16432427
hi can you guys help me, someone told me that 10 + 10 = 20 and 11 + 11 = 20 too and I don't get it
Anonymous at Tue, 15 Oct 2024 07:00:21 UTC No. 16432495
>>16432396
The wikipedia page for q-analogs doesn't make any sense, what are they really?
Anonymous at Tue, 15 Oct 2024 07:46:40 UTC No. 16432529
>>16432427
>11+11=20
It's pretty fucking obvious you just factor out the one here:
1(10+10)=10+10=20
Anonymous at Tue, 15 Oct 2024 09:05:30 UTC No. 16432596
Since I first learned of it the multi-armed bandit model seems to apply to every aspect of human life. Shouldn't it be more widely taught?
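If anyone wants to poke at it concretely, here is a minimal sketch of the model in code; epsilon-greedy on made-up Bernoulli arms, with every name and number being an illustration rather than any standard library's API:
[code]
import random

def epsilon_greedy_bandit(true_probs, steps=10_000, eps=0.1):
    """Simulate an epsilon-greedy agent on Bernoulli-reward arms."""
    n = len(true_probs)
    counts = [0] * n      # how often each arm was pulled
    values = [0.0] * n    # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda i: values[i])     # exploit
        reward = 1.0 if random.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return values, counts, total

print(epsilon_greedy_bandit([0.2, 0.5, 0.7]))
[/code]
The explore/exploit trade-off is the whole point: the agent mostly pulls whatever currently looks best but keeps sampling the rest, and the estimates converge toward the true probabilities for the arms it pulls often.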
Anonymous at Tue, 15 Oct 2024 14:03:26 UTC No. 16432933
>>16432926
It's fundamentally a triangular tiling of the Euclidean plane, just with thick black outlines on the triangles in question
Anonymous at Tue, 15 Oct 2024 14:16:55 UTC No. 16432949
>>16432933
Thanks
Anonymous at Tue, 15 Oct 2024 15:28:28 UTC No. 16433045
>>16432396
Why is he sitting in that chair backwards?
Anonymous at Tue, 15 Oct 2024 15:29:28 UTC No. 16433048
>>16432427
Maybe he meant in base 11
Anonymous at Tue, 15 Oct 2024 16:03:50 UTC No. 16433097
>>16432427
Did you hear about the race between nineteen and twenty? 21
Anonymous at Tue, 15 Oct 2024 16:27:55 UTC No. 16433130
Any tensor algebra and calculus textbook that isn't proof based, I'm just a simple engineer
Anonymous at Tue, 15 Oct 2024 18:07:35 UTC No. 16433251
>>16433130
A Brief on Tensor Analysis, Simmonds. Best of the best
Anonymous at Tue, 15 Oct 2024 18:33:51 UTC No. 16433275
>>16433097
That’s amazing, can’t believe I’ve never heard this one before
🗑️ Anonymous at Tue, 15 Oct 2024 23:42:48 UTC No. 16433628
>>16431947
what about when A is the y-axis in R^2 and f(t)=(1/t,t)
Anonymous at Wed, 16 Oct 2024 02:34:35 UTC No. 16433786
>>16431947 simple counterexample, let A be the y axis with the origin deleted, and let the image of f be the x axis, say f(t)=(t,0)
Anonymous at Wed, 16 Oct 2024 03:06:08 UTC No. 16433824
>>16433786
I didn’t see A has to be closed.
How about A is the y axis with (-1,1) removed, and f(t)=(0,sin(t)-1/t)
Anonymous at Wed, 16 Oct 2024 04:04:36 UTC No. 16433884
>>16433824
sin(t)-1/(t+2), rather
Anonymous at Wed, 16 Oct 2024 04:50:50 UTC No. 16433922
>>16433650
why do you fuckers keep posting this meme list even though not one of you actually ever read a book from it? it just seems like heavy cool math words that make you sooooo cool among the other kids
Anonymous at Wed, 16 Oct 2024 05:03:46 UTC No. 16433933
need to find the derivative of this, I did it but got a huge gigantic almost unreadable function
I got: 10(U)^9 * (e^pix + pie^pixcosx+2x+2cosx+x^2-sinx) / (1+cosx)^2
Is this answer even close to being right? mathway can't figure it out and neither can my calculator
Anonymous at Wed, 16 Oct 2024 05:06:28 UTC No. 16433936
>>16433933
it's not too hard to do by hand but wolframalpha was able to do it no problem
Anonymous at Wed, 16 Oct 2024 05:10:30 UTC No. 16433940
>>16433936
is my answer even close
Anonymous at Wed, 16 Oct 2024 05:12:40 UTC No. 16433941
>>16433940
I don't really feel like looking at your non-latex answer, try using wolfram alpha
Anonymous at Wed, 16 Oct 2024 05:13:47 UTC No. 16433942
>>16433940
here
Anonymous at Wed, 16 Oct 2024 05:22:38 UTC No. 16433952
>>16433940
have you considered asking wolframalpha if the answer you got is equal to the answer it got?
Anonymous at Wed, 16 Oct 2024 05:53:17 UTC No. 16433979
>>16433942
>>16433952
I tried wolframalpha but it doesn't explain a lot. So, I take the chain rule twice here right?
Anonymous at Wed, 16 Oct 2024 06:10:13 UTC No. 16433985
>>16433979
fine, i'll do it by hand for you
>>16433933
first, take the chain rule, yielding
[eqn]
10\left(\frac{e^{\pi x}+x^2}{1+\cos(x)}\right)^9\cdot\frac{d}{dx}\left(\frac{e^{\pi x}+x^2}{1+\cos(x)}\right)
[/eqn]
then the inner derivative is (by the quotient rule)
[eqn]
\frac{(1+\cos(x))\cdot(e^{\pi x}+x^2)' - (e^{\pi x}+x^2)\cdot(1+\cos(x))'}{(1+\cos(x))^2}
[/eqn]
the derivative of [math]e^{\pi x}+x^2[/math] is [math]\pi e^{\pi x} + 2x[/math]
the derivative of [math]1+\cos(x)[/math] is [math]-\sin(x)[/math]
now stop being a fucking faggot and do your homework properly and actually read the textbook
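If you'd rather check yourself than trust a CAS blindly, a quick SymPy session does it; this is just a sketch, and it assumes the function is [math]\left(\frac{e^{\pi x}+x^2}{1+\cos x}\right)^{10}[/math], which is what the steps above suggest:
[code]
import sympy as sp

x = sp.symbols('x')
f = ((sp.exp(sp.pi * x) + x**2) / (1 + sp.cos(x)))**10

# Derivative via SymPy, then a hand-computed candidate from the chain + quotient rules.
df = sp.diff(f, x)
candidate = 10 * ((sp.exp(sp.pi*x) + x**2) / (1 + sp.cos(x)))**9 * (
    ((1 + sp.cos(x)) * (sp.pi*sp.exp(sp.pi*x) + 2*x)
     - (sp.exp(sp.pi*x) + x**2) * (-sp.sin(x))) / (1 + sp.cos(x))**2)

# Zero means the two expressions agree; anything else means a slip somewhere.
print(sp.simplify(df - candidate))
[/code]
The same trick works for checking your own plain-text answer: type it in as a SymPy expression and simplify the difference.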
Anonymous at Wed, 16 Oct 2024 13:35:56 UTC No. 16434363
I studied Munkres up to compact spaces, should I keep going with Munkres or go back and learn analysis? Currently I only know basic analysis.
Anonymous at Wed, 16 Oct 2024 15:26:30 UTC No. 16434516
>>16434363
The first part of munkres is basic analysis, so I would keep going.
Anonymous at Wed, 16 Oct 2024 19:53:35 UTC No. 16435062
what books filtered you? chang's continuous model theory is mogging me brutally as we speak
Anonymous at Wed, 16 Oct 2024 19:54:43 UTC No. 16435064
Anonymous at Wed, 16 Oct 2024 20:04:45 UTC No. 16435090
>>16435062
Haven't gotten filtered yet but I've just started researching like a grown up. I'm excited to see what kind of knowledge can burden me to such a degree.
🗑️ Anonymous at Wed, 16 Oct 2024 20:30:57 UTC No. 16435132
after a hard day of studying algebraic geometry I save myself the right to jerk off to shnuckel bea getting shat on by some dudes
Anonymous at Wed, 16 Oct 2024 21:52:24 UTC No. 16435277
>>16435062
continuous model theory is hard; most brutal book for me was picrel
Anonymous at Thu, 17 Oct 2024 05:24:41 UTC No. 16435798
Let [math] A [/math] be a nonempty closed subset of [math] \mathbb{R}^n [/math].
Let [math] f : [0,\infty) \rightarrow \mathbb{R}^n [/math] be an injective continuous function.
Suppose [math] A \cap \mathrm{image}(f) = \emptyset [/math], and also suppose [math] \mathrm{lim}_{t\to\infty} f(t) [/math] does not exist.
Then is [math] A \cup \mathrm{image}(f) [/math] necessarily non-path-connected?
We note [math] A \cup \mathrm{image}(f) [/math] can be connected but non-path-connected; the topologists' sine curve is such an example.
Anonymous at Thu, 17 Oct 2024 18:35:36 UTC No. 16436720
>>16435798
I tried to outline a proof
Consider a path [math]g(s)[/math] which connects a point [math]a[/math] in [math]A[/math] to [math]f(0)[/math]. The curve must intersect the boundary of [math]A[/math] at some point [math]g(s_0)[/math].
Consider a small neighborhood [math]N_\epsilon(g(s_0))[/math]. The set [math]B_\epsilon=\{t:f(t)\in N_\epsilon(g(s_0))\}[/math] must be unbounded, or else by continuity there exists a number [math]M>0[/math] such that [math]\|f(t)-g(s_0)\|>M\ \forall t[/math], in which case [math]N_M(g(s_0))\cap \mathrm{image}(f)=\emptyset[/math]. Therefore there is a subsequence [math]t_1,t_2,\dots[/math] such that [math]t_n\rightarrow\infty[/math] as [math]n\rightarrow\infty[/math] and [math]\lim_{n\rightarrow\infty}f(t_n)=g(s_0)[/math].
Observe that [math]g(s)[/math] can be assumed injective on the image of [math]f[/math] WLOG, because for every curve [math]g'[/math] there is a curve [math]g[/math] with the same image which is injective on the image of [math]f[/math], obtained by deleting overlapping path components (image [math]f[/math] is in bijection with the nonnegative real line so it plays nicely). Then [math]g(s)[/math] restricted to image [math]f[/math] is equal to [math]f[/math] up to a change of variables. Then [math]\lim_{s\rightarrow s_0^+}g(s)=g(s_0)\implies\lim_{t\rightarrow\infty}f(t)=g(s_0)[/math], contradicting the assumption that [math]\lim_{t\to\infty} f(t)[/math] does not exist.
Anonymous at Thu, 17 Oct 2024 18:45:06 UTC No. 16436748
>>16433650
is mastering all this even realistic
Anonymous at Thu, 17 Oct 2024 21:59:43 UTC No. 16437020
>>16436748
It's a good start
Anonymous at Fri, 18 Oct 2024 19:06:26 UTC No. 16438455
>>16432529
kek
captcha 0MG4H0
Anonymous at Sat, 19 Oct 2024 07:19:10 UTC No. 16439376
The proof of Fubini's Theorem is 5 pages in my textbook. And I really don't like the presentation (it has not defined product measures, or even what the [math]dx, dy[/math] mean; the entire time we've been working with [math]f:\mathbb{R}^d \to \mathbb{R}[/math]). Tempted to just skip the proof and move on.
Anonymous at Sat, 19 Oct 2024 08:33:28 UTC No. 16439405
good resources on logic and discrete math?
Anonymous at Sat, 19 Oct 2024 09:34:51 UTC No. 16439461
>>16439376
The proof should be quite standard "approximate, approximate, approximate".
You have sigma-finite measures, so you first show the result for finite measures and take a limit later. You have integrable functions, so you first show it for simple functions and take limits later. You have an indicator of a set in the product sigma algebra, so you first take an indicator of a measurable rectangle (which generates that sigma algebra). For these indicators it's obvious, so you're done.
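Schematically the chain of reductions is (just a sketch, writing [math]\mathcal{A}\otimes\mathcal{B}[/math] for the product sigma-algebra that the textbook apparently postpones):
[eqn]\mathbf{1}_{A\times B}\ \rightsquigarrow\ \mathbf{1}_{E},\ E\in\mathcal{A}\otimes\mathcal{B}\ \rightsquigarrow\ s=\sum_{k=1}^{m}c_k\mathbf{1}_{E_k}\ \rightsquigarrow\ f\geq 0\ \rightsquigarrow\ f\in L^1,[/eqn]
with a monotone class (Dynkin) argument carrying the first step, linearity and monotone convergence the middle ones, and the split [math]f=f^+-f^-[/math] the last; sigma-finiteness is what lets you exhaust the space by sets of finite measure along the way.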
Anonymous at Sat, 19 Oct 2024 09:48:03 UTC No. 16439472
>>16439461
Yeah, after staring at it and rewriting as I go for some time I almost have it through. So far everything in the text justifies it. I suppose I was just annoyed by what to me seemed like a lack of rigor for example in pic rel where we still don't have a precise definition of what [math]dx[/math] and [math]dy[/math] mean (I know they mean the respective measures, but not rigorously) so that just reads as nonsense and should instead be, based on what we have done and only what we have done
[math]\lim_{k \to \infty} \int_{\mathbb{R}^d} f_k = \int_{\mathbb{R}^d} \lim_{k\to\infty} f_k = \int_{\mathbb{R}^d} f[/math].
The text has not introduced the ideas of sigma-finite measure, or like I said a product measure. That is covered in the Abstract Measure Theory chapter much further along.
Nevertheless I am now able to follow the proof (almost done now), and like you said the strategy is just to show it is the case with indicator functions on measurable sets (starting with cubes/rectangles etc.), as then you can extend that to [math]L^1(\mathbb{R})[/math] and be finished. It's quite neat.
Anonymous at Sat, 19 Oct 2024 09:49:34 UTC No. 16439474
>>16439472
[math]L^1(\mathbb{R}^d)[/math]*
Anonymous at Sat, 19 Oct 2024 14:39:14 UTC No. 16439774
I don't need homework help, but I am legitimately curious.
I'll even change the equation from what I was given so you can't say I'm asking for answers:
Why does an equation like this, when you solve it by grouping and factoring, do this:
(y^3-whocares*y)=0
Become
y(y^2-whocares)=0
And then, for whatever reason,
y=0 is one of the answers before we solve for the other two with the y^2-whocares =0.
I don't understand the leap of logic that makes us split up the equation.
Anonymous at Sat, 19 Oct 2024 14:45:00 UTC No. 16439785
>>16439774
Mathematics is a lot like making love to a beautiful woman. When you see an equation you have to caress her, gently manipulating her parts until you have her in the correct position.
I hope this helped.
🗑️ Anonymous at Sat, 19 Oct 2024 14:52:57 UTC No. 16439798
>>16439785
Is this Barron Trump? Only HE knows what a beautiful woman is
Anonymous at Sat, 19 Oct 2024 15:02:33 UTC No. 16439807
>>16439774
That just follows from the distributive law:
[math]y(y^2 - G) = y*y^2 - y*G = y^3 - Gy[/math]
(writing G for your "whocares"). And once it's factored, a product is zero exactly when one of its factors is zero, so [math]y=0[/math] is one answer and the other two come from [math]y^2 - G = 0[/math].
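If you want to watch a CAS do the same factor-then-split step (a sketch; SymPy assumed, with 5 standing in for "whocares"):
[code]
import sympy as sp

y = sp.symbols('y')
expr = y**3 - 5*y                   # stand-in for y^3 - whocares*y

print(sp.factor(expr))              # y*(y**2 - 5)
print(sp.solve(sp.Eq(expr, 0), y))  # [0, -sqrt(5), sqrt(5)]
[/code]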
Anonymous at Sat, 19 Oct 2024 15:05:42 UTC No. 16439812
>>16439807
This follows from the fagittutive laws, that the post this links to is a fag, oh so help me god.
Anonymous at Sat, 19 Oct 2024 15:38:55 UTC No. 16439847
I'm reading the mcat preparation book next month when I get it and am reading some of them now that aren't math related
If it's maths (and physics!) that I get then I'm really worried. I'm bad with numbers
Have any of you read the maths and physics mcat book? What's in it? How do I force myself to learn these subjects for the first time in my life (not even high school education)?
Anonymous at Sat, 19 Oct 2024 17:25:12 UTC No. 16440000
>>16439785
How does this metaphor extend to those equations where you can't solve for a given variable?
e.g. [math]z=we^w[/math]
How do I get a woman to give me a w
Anonymous at Sat, 19 Oct 2024 19:50:21 UTC No. 16440224
>>16440000
You use something that's made just for her... in your example, it's the W function.
Then, she'll turn around and say "nice quads" and "that was wonderful, babe".
Anonymous at Sun, 20 Oct 2024 09:35:22 UTC No. 16440956
Which book is best if I want to learn the mathematics behind AI?
Deep Learning by Ian Goodfellow, Yoshua Bengion, Aaron Courville
or
Mathematics for Machine Learning by Marc Peter Deisenroth, Aldo Faisal, Chen Soon Ong
Anonymous at Sun, 20 Oct 2024 11:43:59 UTC No. 16441048
any good book on elliptic functions that doesn't go "yeah, this function is defined as this theta ratio, now deal with these hundred addition formulas" (like Whittaker & Watson or Lawden)? a book that has great intuition and applications would be great
Anonymous at Sun, 20 Oct 2024 21:10:33 UTC No. 16441699
When you cut a polyhedron into a 2-dimensional net, the edges that are cut along always form a spanning tree of the graph of the polyhedron.
4D polytopes have 3D nets which are made by cutting along faces of the polytope. Is there a way to make some kind of higher-dimensional analog to graphs that follows this? I looked at the wikipedia page for hypergraphs and it isn't what I have in mind.
Anonymous at Mon, 21 Oct 2024 10:26:06 UTC No. 16442365
Posted this on /sqt/ with no response still, hope you guys can help me out. Any help is appreciated. Also, I lost pretty much 2 weeks worth of differential geometry class, so I gotta catch up on a lot of things. With that said, I could use some good extra references (textbooks, lecture notes, etc). Thanks in advance.
What really is the distribution of vector fields?
I thought it was just the span of the vector fields but apparently that definition is only locally correct (whatever that means).
And also, the definition I'm using for vector field is that a vector field on a manifold [math]M[/math] is a linear map [math]X:C^\infty(M)\to C^\infty(M)[/math] that is also a derivation. Following from this definition, how can I define a covector field? A covector field is supposed to be a differential form, right?
Anonymous at Mon, 21 Oct 2024 10:39:17 UTC No. 16442372
There's only so much bits and bobs to discover in all-tech before you know how to create any tech gayly. I am masterful in all-tech.
Anonymous at Mon, 21 Oct 2024 10:41:53 UTC No. 16442374
>>16442372
You DO overflow, same with art, similar with chemicals.
Anonymous at Mon, 21 Oct 2024 15:03:34 UTC No. 16442616
>>16442365
A covector field eats a vector field and spits out a smooth function, [math]C^\infty(M)[/math]-linearly; pointwise, it sends a tangent vector to an element of the base field.
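To spell that out in the derivation picture being used above (standard definitions, nothing specific to any one course): a covector field (1-form) [math]\omega[/math] assigns to each vector field [math]X[/math] a smooth function [math]\omega(X)\in C^\infty(M)[/math], and does so [math]C^\infty(M)[/math]-linearly, [math]\omega(fX+gY)=f\,\omega(X)+g\,\omega(Y)[/math]. The basic example is the differential of a function, [math]df(X):=X(f)[/math], and locally every 1-form looks like [math]\sum_i a_i\,du^i[/math] with smooth coefficients [math]a_i[/math].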
Anonymous at Mon, 21 Oct 2024 17:42:29 UTC No. 16442817
>>16436720
Thank you for your help anon. Sorry I didn't respond earlier, I was meeting some relatives
Anonymous at Mon, 21 Oct 2024 20:24:35 UTC No. 16443037
What is the point of regurgitating all of the theorems I learnt in real analysis in terms of exact sequences? Sure, fine, there's a kind of isomorphism between some theorems, what makes this particular kind of isomorphism so great?
Anonymous at Mon, 21 Oct 2024 21:42:40 UTC No. 16443150
>>16432396
Tell me about trigonometry /sci/ What exactly is it? What can I use it for? Can I create an LLM with it? Can I make Skynet with it? Can I make a Quantum Computer with it?
Anonymous at Mon, 21 Oct 2024 22:06:56 UTC No. 16443196
>>16443150
im sorry but i doubt you are gonna be able to do any of those if you dont know what trigonometry is anon
Anonymous at Tue, 22 Oct 2024 11:10:45 UTC No. 16443934
I read something about logic years ago that said all definable parts of third-order, fourth-order etc. logic can be represented with arbitrarily long formulas in second-order logic. Is this true?
Anonymous at Tue, 22 Oct 2024 13:50:17 UTC No. 16444119
How sophisticated were you in your freshman year of uni? I'm a few months into the program, and prior to that I had pretty much covered undergraduate real analysis in 12th grade of high school. I'm also studying complex analysis on my own.
Anonymous at Tue, 22 Oct 2024 15:39:50 UTC No. 16444261
>>16444119
I knew pretty much nothing when I started. Only the abc-formula and stuff like that.
Anonymous at Tue, 22 Oct 2024 18:33:44 UTC No. 16444482
The infinitists got Wildberger
https://njwildberger.com/2024/10/22
https://www.youtube.com/@njwildberg
Anonymous at Tue, 22 Oct 2024 18:51:19 UTC No. 16444500
>>16444482
I stick every maths youtuber I actually enjoy on archive.org. It's only Eugene Khutoryansky and Taylor Dupuy so far but that's because everybody else is shit-tier.
Anonymous at Wed, 23 Oct 2024 21:42:09 UTC No. 16446396
>>16433130
You will do ze proofs and you will like it
Anonymous at Thu, 24 Oct 2024 22:10:37 UTC No. 16447912
Geometry bros: Lee or do Carmo?
Anonymous at Thu, 24 Oct 2024 23:25:15 UTC No. 16447981
>>16447912
EGA
Anonymous at Fri, 25 Oct 2024 00:08:22 UTC No. 16448031
>>16435062
I doubt there even exists one person (excluding Gromov) who really "got it".
Anonymous at Fri, 25 Oct 2024 00:18:27 UTC No. 16448057
Is there any way I can approach analysis from an abstract algebra standpoint? I mean, take an analysis problem and solve it using abstract algebra techniques and concepts, and reformulate any analysis theorem and definition with an algebra-based definition?
If so, do you guys know any textbooks that go for this approach?
Anonymous at Fri, 25 Oct 2024 00:18:36 UTC No. 16448059
>>16447912
Not Lee, at least not as your first book. It's terrible for learning as motivation is basically void in his Manifolds and Differential Geometry book, he even admits it and points to his later works. Don't really recall much from do Carmo, though I used his books for some of my lectures. If you're new to DG, try O'Neill's Semi-Riemannian Geometry With Applications to Relativity instead. Sharpe, Lee, Brédon, etc. are for when you know the basics imo. Also check out the various different lecture notes that are floating on the web, some are really good.
Anonymous at Fri, 25 Oct 2024 01:15:40 UTC No. 16448143
>>16448057
The intersection of algebra and analysis is not simple. What you are describing can only happen in certain situations. Take algebraic topology or lie groups for example
Anonymous at Fri, 25 Oct 2024 03:06:28 UTC No. 16448287
>>16448057
ww.math.uni-bonn.de/people/scholze/
Anonymous at Fri, 25 Oct 2024 03:51:59 UTC No. 16448337
>>16447912
Uhhhhh, i don't love do Carmo. But i don't have a recommendation, sorry. So go with the other guy's rec
Anonymous at Fri, 25 Oct 2024 07:02:53 UTC No. 16448520
>>16448287
shieet i didn't know scholze posts on mg
Anonymous at Fri, 25 Oct 2024 11:31:38 UTC No. 16448743
Why can't progress be made on Navier-Stokes by just mapping its phase space?
Anonymous at Fri, 25 Oct 2024 13:00:12 UTC No. 16448853
>>16448287
Hey anon, thanks.
The link's not working for me, could you provide another link or the textbook name?
Anonymous at Fri, 25 Oct 2024 13:04:05 UTC No. 16448861
>>16448853
www.math.uni-bonn.de/people/scholze
w was missing.
It was a joke anyway.
Anonymous at Fri, 25 Oct 2024 13:07:30 UTC No. 16448866
>>16448861
>w was missing.
lol, now I'm feeling dumb.
>It was a joke anyway.
That's alright. I appreciate it anyway, seems like a nice text. Thanks.
Anonymous at Fri, 25 Oct 2024 13:55:56 UTC No. 16448965
Is it weird to have math dreams? I just woke up in the middle of the night and my brain was trying to solve some equations or some shit, I was still like half-asleep, but I clearly remember like my brain was still active in that way, has happened to me a few times now, especially when I'm thinking about some concepts in detail and trying to visualize them
Anonymous at Fri, 25 Oct 2024 14:38:05 UTC No. 16449022
>>16448989
[math]A^{(X)}[/math] is the set of maps [math]f:X\to A[/math] for which there is a finite subset [math]X_0[/math] such that f is 0 outside of [math]X_0[/math]. It is "the free A-module with basis X". More precisely, if [math]x\in X[/math], denote [math]e_x\in A^{(X)}[/math] the function which is 1 in x and 0 elsewhere, then [math]\{e_x: x\in X\}[/math] is a basis for the A-module [math]A^{(X)}[/math].
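A tiny concrete case, just to anchor the notation: for [math]X=\{1,2,3\}[/math] and [math]A=\mathbb{Z}[/math] we get [math]A^{(X)}\cong\mathbb{Z}^3[/math] with [math]e_1=(1,0,0),\ e_2=(0,1,0),\ e_3=(0,0,1)[/math]. The parenthesised superscript only starts to matter when [math]X[/math] is infinite: [math]A^{(X)}[/math] (finitely supported functions, the direct sum) is then strictly smaller than [math]A^{X}[/math] (all functions, the direct product).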
Anonymous at Fri, 25 Oct 2024 15:07:23 UTC No. 16449045
>>16449022
In this case, can't I just write [math]\langle X\rangle[/math] or [math]\langle e_x\rangle_{x\in X}[/math] instead? What are the advantages of such convoluted and obnoxious notation that the author is using?
Anonymous at Fri, 25 Oct 2024 15:57:46 UTC No. 16449118
>>16449045
Where is the ring A in that notation?
Anonymous at Fri, 25 Oct 2024 16:01:48 UTC No. 16449123
>>16449118
Well, I could define e_x beforehand and use the second notation I presented. That way it's clear what ring I'm considering.
Anonymous at Fri, 25 Oct 2024 16:03:12 UTC No. 16449126
>>16449123
The notation is fine, I have no idea why you're being a baby about it lmao
Anonymous at Fri, 25 Oct 2024 16:06:45 UTC No. 16449127
>>16449126
Ok, I'll decide on the notation later.
Anyway, do I have to make any assumptions about [math]X[/math]? I mean, does it have to be finite, infinite and/or countable, must it have some algebraic structure, etc? Or any set will do?
Anonymous at Fri, 25 Oct 2024 16:50:39 UTC No. 16449160
>>16440956
Start by not reading a book about the mathematics surrounding machine learning. They're overcomplicated, meant to confuse you, and you will only find yourself increasingly more confused.
I'd highly encourage you to start by doing some linear algebra and Calculus III. My old university had all of the calculus texts available online for free and you can find them here:
https://arts-sciences.und.edu/acade
Once you feel pretty comfortable with that, move into some differential equations and linear algebra. Focus on the matrix multiplication in linear and learn DE to enhance your problem-solving ability.
>>16435090
I like you. Researching like an adult but with the wonderment of a child.
>>16433130
>Proofs
Engineer here as well. I'm getting my ass mogged right now in linear algebra cause it's so theory heavy. Nevertheless, I feel like it will enhance my ability in the field to solve problems in some way. It might just be in our best interest to do the proofs and get 'em over with so if they come up in the future we'll know how to deal with 'em.
If any anons have some suggestions for writing proofs/getting good at them, I'd appreciate it
Anonymous at Fri, 25 Oct 2024 20:00:04 UTC No. 16449435
>>16449160
>Suggestions for getting good at proofs
Learn from the masters! Read Euclid, Dedekind, Edmund Landau and André Weil. Alternatively, i find it easier to grasp the proof concept following a good book/course on proof-based elementary linear algebra rather than any of those Transition books/courses
Anonymous at Fri, 25 Oct 2024 23:29:44 UTC No. 16449700
what's the difference between top unis and like low top 500 unis for undergrad and up
Anonymous at Fri, 25 Oct 2024 23:38:49 UTC No. 16449716
>>16449700
For undergrad it matters diddly squat; top unis are top because they have the best researchers. They will look good on your CV though. After that it starts mattering more.
Anonymous at Fri, 25 Oct 2024 23:40:30 UTC No. 16449719
>>16449700
Mostly social. You develop a much better network of talented people at top unis because almost all the most talented people are clustered there. The courses are "the same" at McUniversities in the sense that they use mostly the same books with mostly the same syllabi but there are a lot of things you can't get from just reading a textbook that you need real people for
Anonymous at Sat, 26 Oct 2024 10:12:48 UTC No. 16450279
What is the most elementary solution to Basel problem? (The only one I know involves Fourier series.)
Anonymous at Sat, 26 Oct 2024 10:53:14 UTC No. 16450298
>>16450279
i think cauchy has one only using basic trigonometry and squeeze theorem
Anonymous at Sat, 26 Oct 2024 14:54:37 UTC No. 16450674
>>16450279
>>16450298
https://en.wikipedia.org/wiki/Basel
Anonymous at Sat, 26 Oct 2024 16:34:01 UTC No. 16450902
I'm trying to use the first isomorphism theorem to prove that there is a canonical isomorphism between [math]\mathcal{A}^k(V)[/math] and [math]\bigwedge^kV[/math], where [math]V[/math] is a vector space (over some field [math]K[/math]) and by [math]\mathcal{A}^k(V)[/math] I mean the subspace of [math]\bigotimes^kV[/math] consisting of skew-symmetric tensors.
First I'm defining [math]\bigwedge^kV[/math] by
[eqn]
\mathop{\bigwedge{}^k}V := \mathop{\bigotimes{}^k}V/J^k(V)
[/eqn]
where I define [math]J^k(V)\leq \bigotimes^kV[/math] to be the subspace spanned by all tensor products [math]\bigotimes_jv_j[/math] s.t. for at least two different indices [math]\lambda,\rho[/math] I have [math]v_\lambda = v_\rho[/math].
If I consider the map
[eqn]
\begin{align*}
\pi : \mathop{\bigotimes{}^k}V &\to \mathop{\bigotimes{}^k}V/J^k(V)\\
\tau &\mapsto \tau + J^k(V)
\end{align*}
[/eqn]
then I just need to find a suitable [math]\varphi \in \mathrm{Hom}(\bigotimes^kV,\mathcal{A}^k(V))[/math] which is surjective with kernel [math]J^k(V)[/math], and the first isomorphism theorem gives the canonical isomorphism.
As an additional question, in my definition of [math]J^k(V)[/math], is it equivalent to my definition to say that [math]J^k(V)[/math] is spanned by all tensor products [math]\bigotimes_jv_j[/math] s.t. [math]\{v_j\}_j[/math] is a linearly dependent collection of vectors? And how would this definition and possible proof of canonical isomorphism hold for modules instead of vector spaces?
Any help is appreciated, thanks in advance.
Anonymous at Sat, 26 Oct 2024 20:27:08 UTC No. 16451164
>>16450279
[math]\sin(\pi x)= \sum\limits_{n=0}^{\infty}\frac{(\pi x)^{2n+1}(-1)^{n}}{(2n+1)!} =(\pi x)\prod\limits_{n=1}^{\infty}\left(1-\frac{x^2}{n^2}\right).[/math]
Equate the [math]x^3[/math] terms.
Idk who first did it (probably euler).
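Writing out the coefficient comparison that the last line gestures at:
[eqn]\pi x-\frac{(\pi x)^3}{3!}+\cdots=\pi x\left(1-x^2\sum_{n=1}^{\infty}\frac{1}{n^2}+\cdots\right)\ \Longrightarrow\ -\frac{\pi^3}{6}=-\pi\sum_{n=1}^{\infty}\frac{1}{n^2}\ \Longrightarrow\ \sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}.[/eqn]
(That is Euler's original argument; making the infinite product step rigorous is the nontrivial part.)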
Anonymous at Sat, 26 Oct 2024 21:36:02 UTC No. 16451270
>>16450902
Isn't phi the map that sends [math]v_1\otimes \cdots\otimes v_n\mapsto \frac{1}{n!}\sum_{\sigma \in S_n}(\operatorname{sign}\sigma)\,v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(n)}[/math]?
Then psi is just the map that sends the wedge product to the summation on the right? I'm retarded though, is my excuse if this is completely wrong. I vaguely remember doing this. I think Keith Conrad goes over this construction in his expository paper on Exterior Powers. He does it for modules, but he does have some examples that cover the specific case of vector spaces.
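For what it's worth, here is the outline I think that reply is pointing at; it assumes [math]\operatorname{char}K=0[/math] and finite-dimensional [math]V[/math], and is only a sketch. Take
[eqn]\operatorname{Alt}(v_1\otimes\cdots\otimes v_k)=\frac{1}{k!}\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\,v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(k)}.[/eqn]
This is linear, surjective onto [math]\mathcal{A}^k(V)[/math] (it is the identity on skew-symmetric tensors), and it kills [math]J^k(V)[/math]: if [math]v_i=v_j[/math] with [math]i\neq j[/math], pair each [math]\sigma[/math] with [math](i\,j)\circ\sigma[/math] and the terms cancel. So [math]\operatorname{Alt}[/math] factors through the quotient and gives a surjection [math]\bigwedge^kV\to\mathcal{A}^k(V)[/math]; comparing dimensions (both are [math]\binom{\dim V}{k}[/math]) makes it an isomorphism. Over a ring where [math]k![/math] is not invertible this breaks down, which is one reason Conrad sticks with the quotient definition for modules.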
Anonymous at Sat, 26 Oct 2024 22:22:13 UTC No. 16451321
Would you agree that calculus is the branch of mathematics that deals with limits?
Anonymous at Sat, 26 Oct 2024 22:46:56 UTC No. 16451339
>>16451321
Analysis is the study of continuous functions on the real numbers in the standard topology. Calculus is a compilation of the most useful techniques from analysis. It could be said that calculus is the study of limits insofar as every real number is the limit of a Cauchy sequence, however this seems too narrow since it doesn't yet mention functions.
Anonymous at Sun, 27 Oct 2024 01:56:13 UTC No. 16451528
>>16451321
I believe you're thinking of category theory.
Anonymous at Sun, 27 Oct 2024 04:24:48 UTC No. 16451691
I should have majored in physics or computer science, but now it’s too late to go back.
Anonymous at Sun, 27 Oct 2024 06:24:17 UTC No. 16451815
>>16451321
More precisely, calculus is the branch of mathematics that deals with the order topology of [math]\mathbb{R}[/math]
Anonymous at Sun, 27 Oct 2024 06:31:11 UTC No. 16451823
>>16451339
>Analysis is the study of continuous functions on the real numbers
How do you define your precious continuous functions without limits? How do you define derivative and definite integral without limits?
>>16451815
Nice.
Anonymous at Sun, 27 Oct 2024 06:55:34 UTC No. 16451832
>>16451691
Why would you think that? Flipping to physics might be a challenge but I know plenty of people who did physics or math that ended up in highly paid CS careers.
Anonymous at Sun, 27 Oct 2024 07:12:21 UTC No. 16451844
>>16451823
More generally, a function is said to be continuous if and only if the inverse image of any open set is open. The meaning of "open" is provided by the topology itself. The "limit definition" of continuity follows as a consequence
Anonymous at Sun, 27 Oct 2024 11:59:29 UTC No. 16451997
Anonymous at Sun, 27 Oct 2024 12:07:50 UTC No. 16452007
>>16432926
What software is this?
Anonymous at Sun, 27 Oct 2024 17:48:51 UTC No. 16452471
What metric on the rationals, when completed, gives a non-Archimedean field?
Anonymous at Sun, 27 Oct 2024 17:52:15 UTC No. 16452483
>>16452471
A metric induced by a non-Archimedean absolute value.
It is in the name anon.
Anonymous at Sun, 27 Oct 2024 17:53:21 UTC No. 16452488
>>16452471
p-adic metrics
Anonymous at Sun, 27 Oct 2024 21:55:47 UTC No. 16452845
My professor is having a complex analysis midterm on a Sunday evening. Do you think I should send him this email?
Dear Prof M,
Sunday is the sacred holy day of our lord and savior Jesus Christ. I consider complex analysis the work of the devil, and I am deeply offended by the insensitive timing of this midterm exam.
Best,
A
Anonymous at Sun, 27 Oct 2024 22:05:57 UTC No. 16452869
>>16452845
He will call you a heathen for not capitalizing Lord.
Anonymous at Mon, 28 Oct 2024 03:43:23 UTC No. 16453170
How do I show that in ZFC there is no S such that S = { {S} }? I can't just apply Russell's paradox here because of the nested membership.
Anonymous at Mon, 28 Oct 2024 03:44:17 UTC No. 16453171
>>16453170
Use Foundation: if [math]S=\{\{S\}\}[/math] then [math]S\in\{S\}\in S[/math], so the set [math]\{S,\{S\}\}[/math] has no [math]\in[/math]-minimal element, contradicting Regularity.
Anonymous at Mon, 28 Oct 2024 07:19:56 UTC No. 16453305
>>16448057
You might be interested in synthetic differential geometry, smooth loci, or non-standard analysis
Anonymous at Mon, 28 Oct 2024 15:14:44 UTC No. 16453597
I'm at a loss here, I've got [math] C([0,1])[/math] of continuous maps from [math] [0,1][/math] to [math] \mathbb{R}[/math] with the norm defined by [math] ||f||_{1}=\int_{0}^{1}|f(t)|dt[/math].
I have F as the composite of [math] k:C([0,1])\rightarrow \mathbb{R}, f\mapsto \int_{0}^{1}f(t)dt [/math] and [math] p:C([0,1])\rightarrow C([0,1]), f\mapsto |f|[/math]. [math] k [/math] is differentiable everywhere, but I'm having a hard time with the differentiability of [math] p [/math] which is continuous at the zero function but my intuition tells me it shouldn't be differentiable at that point. Any ideas?
Anonymous at Mon, 28 Oct 2024 15:19:29 UTC No. 16453603
>>16453597
F(f)=F(-f) would force the derivative at 0 to be 0, but F(f) is of the order of [math]\|f\|_1[/math], so the derivative at 0 cannot be 0; hence F is not differentiable at the zero function.
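Spelled out a bit, since F here is just the 1-norm: [math]F(h)=\int_0^1|h(t)|\,dt=\|h\|_1[/math]. If [math]DF(0)[/math] existed, then [math]F(h)=DF(0)h+o(\|h\|_1)[/math] and [math]F(-h)=-DF(0)h+o(\|h\|_1)[/math]; since [math]F(h)=F(-h)[/math], subtracting gives [math]DF(0)h=o(\|h\|_1)[/math], which for a linear map forces [math]DF(0)=0[/math]. But then [math]\|h\|_1=F(h)=o(\|h\|_1)[/math], which is false. So [math]F[/math] (and, by the same evenness argument, [math]p[/math]) is not Fréchet differentiable at the zero function, matching your intuition.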
Anonymous at Mon, 28 Oct 2024 15:49:01 UTC No. 16453632
What's the latest/largest found prime number in an unbroken prime sequence 2, 3, 5, 7, 11, 13, 17, 19... Both value and index
(I'm not talking about Mersenne prime)
Anonymous at Mon, 28 Oct 2024 16:22:14 UTC No. 16453676
>>16453632
Largest I can find doing an exhaustive check is: https://sweet.ua.pt/tos/goldbach.ht
But as that site and sites like https://t5k.org/lists/small/million
Anonymous at Mon, 28 Oct 2024 17:35:31 UTC No. 16453776
>>16436748
Of course not, this post you referenced was written by a literal schizo 100%
Anonymous at Mon, 28 Oct 2024 17:55:04 UTC No. 16453807
>>16453776
>this post you referenced was written by a literal schizo 100%
http://verbit.ru/
Anonymous at Mon, 28 Oct 2024 18:03:43 UTC No. 16453821
>>16453807
It's not possible to learn all of this in just a few years
Anonymous at Mon, 28 Oct 2024 19:03:16 UTC No. 16453880
>>16452007
I made it myself, I call it the image calculator. I haven't released it yet but I did release the similar "drawing calculator"
https://github.com/Photosounder/dra
Anonymous at Tue, 29 Oct 2024 00:26:15 UTC No. 16454223
What's an example of a non-measurable function? Say from the real line to itself
Anonymous at Tue, 29 Oct 2024 00:31:20 UTC No. 16454232
>>16454223
Take the characteristic function of a non-measurable set. (Example of a non-measurable set: take one point in [0,1] from each equivalence class of (R,+)/(Q,+); it is non-measurable because countably many rational translates of it cover R, which is incompatible with countable additivity and translation invariance.)
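Spelled out (the standard Vitali argument, just sketched): let [math]V\subset[0,1][/math] contain one representative of each coset; the translates [math]V+q[/math], [math]q\in\mathbb{Q}[/math], are pairwise disjoint and
[eqn][0,1]\subseteq\bigcup_{q\in\mathbb{Q}\cap[-1,1]}(V+q)\subseteq[-1,2].[/eqn]
If [math]V[/math] were measurable, translation invariance would give every translate the same measure [math]\lambda(V)[/math], and countable additivity would force [math]1\leq\sum_{q}\lambda(V)\leq 3[/math]; the sum is 0 if [math]\lambda(V)=0[/math] and infinite if [math]\lambda(V)>0[/math], so no value works. Hence [math]V[/math] is non-measurable and [math]\mathbf{1}_V[/math] is a non-measurable function.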
Anonymous at Tue, 29 Oct 2024 00:31:38 UTC No. 16454233
>>16454223
The indicator function of a non-measurable subset of the real line.
Anonymous at Tue, 29 Oct 2024 02:10:32 UTC No. 16454322
Can someone be smart without understanding math beyond hs level + statistics? You need statistics for academia and business so thats a given. Some of my professors are math idiots.
Anonymous at Tue, 29 Oct 2024 02:29:15 UTC No. 16454336
>>16454322
If they're getting money who's the idiot after all?
Anonymous at Tue, 29 Oct 2024 02:36:12 UTC No. 16454345
>>16454336
Lots of retards make money. It's called being a manager and having your parents pay for college. Or being a welder.
Anonymous at Tue, 29 Oct 2024 09:19:36 UTC No. 16454611
>>16451997
What does the "l" mean? I get it up to the sigma algebra part.
Anonymous at Tue, 29 Oct 2024 10:02:25 UTC No. 16454620
>>16454611
Lebesgue measure.
Anonymous at Tue, 29 Oct 2024 10:43:18 UTC No. 16454636
>>16454620
How do you pronounce Lebesgue? (Apparently it's [le-BEG] but I need a second opinion.)
Anonymous at Tue, 29 Oct 2024 10:52:21 UTC No. 16454651
>>16454636
I say lehbek
Anonymous at Tue, 29 Oct 2024 12:13:11 UTC No. 16454679
>>16440000
She is out of your league
badum tsss!
Anonymous at Tue, 29 Oct 2024 12:22:18 UTC No. 16454684
>>16454636
isnt it like lebeyg where bey is same as baby
Anonymous at Tue, 29 Oct 2024 14:13:29 UTC No. 16454766
>>16454636
https://voca.ro/1idgmywHBlgk
Anonymous at Tue, 29 Oct 2024 19:26:44 UTC No. 16455116
if [math]a=o(\rho^{n-1})[/math] where [math]\rho=\sqrt{h_1^2+h_2^2+\dots+h_m^2}[/math], does it follow that [math]a\,h_i=o(\rho^{n})[/math] for each [math]i[/math]?
Anonymous at Tue, 29 Oct 2024 19:31:55 UTC No. 16455127
>>16455116
[math]|h_i|\leq \rho[/math]
Anonymous at Tue, 29 Oct 2024 21:36:45 UTC No. 16455289
Are there transformations of some vector, covector, or tensor that preserve entropy, i.e. stretching, shrinking, or padding an image while keeping the relative entropy between the original and the new one unchanged?
Anonymous at Wed, 30 Oct 2024 01:40:26 UTC No. 16455466
Is it possible to prove 'the complement of a set is a proper class' constructively and just using separation?
I can think of ways to prove it constructively using either (binary) union + separation or just induction (to use that \in is asymmetric).
I've also managed to prove it just using separation but relying on classical logic.
Anonymous at Wed, 30 Oct 2024 02:23:35 UTC No. 16455498
Are there any non-trivial solutions to [math]\int f = f'[/math]?
Anonymous at Wed, 30 Oct 2024 02:28:13 UTC No. 16455500
>>16455498
f = f'' ?
Anonymous at Wed, 30 Oct 2024 02:37:34 UTC No. 16455508
>>16455498
f=e^x
Anonymous at Wed, 30 Oct 2024 18:23:47 UTC No. 16456078
>>16455498
f(x)=cosh(x)
Anonymous at Wed, 30 Oct 2024 18:44:12 UTC No. 16456114
how do I learn boolean logic very well? any good books
Anonymous at Wed, 30 Oct 2024 19:04:55 UTC No. 16456125
>>16456114
Halmos
Anonymous at Thu, 31 Oct 2024 04:24:43 UTC No. 16456647
Just flunked a math test. It only dropped my overall grade by 2% though. I'll still pass. I've been in a mental health spiral for the last 2-3 weeks. Totally incapacitated. I feel like I might be coming out of it though. I have another one in 2 days for a different class. Please pray for me.
Anonymous at Thu, 31 Oct 2024 06:13:39 UTC No. 16456700
>>16456647
good luck dawg, it's no big deal, don't stress it so much, school is not the only important thing
Anonymous at Thu, 31 Oct 2024 07:42:11 UTC No. 16456722
>>16456647
>tfw no dumb cat-twink bf to teach math
Anonymous at Fri, 1 Nov 2024 07:17:15 UTC No. 16457876
what do you think about this
>>>/g/103047606
Anonymous at Fri, 1 Nov 2024 07:48:36 UTC No. 16457890
is the Seifert-van Kampen theorem the most powerful theorem in topology?
>>16455498
there are no solutions to this because the newtonian integral of a function returns a family of functions, while the derivative of a function returns a single function. if you want to "fix" one value for the constant + C and so have the integral output be a single function, you can rewrite the expression as f = f''.
<e^x,e^-x> is a basis for the vector space of functions that solve f = f''
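A quick check of the two candidates mentioned above, reading the integral as an antiderivative with the constant chosen to be 0: for [math]f(x)=Ae^{x}+Be^{-x}[/math],
[eqn]\int f\,dx = Ae^{x}-Be^{-x}+C,\qquad f'(x)=Ae^{x}-Be^{-x},[/eqn]
so any such [math]f[/math] satisfies [math]\int f=f'[/math] once [math]C=0[/math]; in particular [math]e^{x}[/math] ([math]A=1,B=0[/math]) and [math]\cosh x[/math] ([math]A=B=\tfrac12[/math]) both work.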
Anonymous at Fri, 1 Nov 2024 08:37:47 UTC No. 16457925
>>16457876
% literally means "per cent", ie divide by 100.
Anonymous at Fri, 1 Nov 2024 10:38:48 UTC No. 16457990
>>16457876
that board is hopeless, even more than this
Anonymous at Fri, 1 Nov 2024 11:29:19 UTC No. 16458030
>>16457876
this is retarded, why would you ever interpret 1% - 1% as "1% minus 1% of 1%"? That just completely changes the meaning of subtraction for no reason.
Anonymous at Fri, 1 Nov 2024 12:13:10 UTC No. 16458060
>>16458030
1/100-1/100 vs 1/100 - (1/100) (1/100)
yea I don't know how they came up with that
Anonymous at Fri, 1 Nov 2024 15:06:56 UTC No. 16458259
Can someone prove to me from the definition that if a function f's second order derivatives are differentiable at a point P0, then the first order derivatives are twice differentiable at that point? i.e. they are differentiable in some neighborhood around that point.
Anonymous at Fri, 1 Nov 2024 15:31:55 UTC No. 16458286
>>16458259
What order of derivative do you get when you differentiate a first-order derivative?
Anonymous at Fri, 1 Nov 2024 15:39:16 UTC No. 16458294
>>16458286
forgot to add it's a two-variable function.
Anonymous at Fri, 1 Nov 2024 15:43:54 UTC No. 16458298
halp >>16458275
Anonymous at Fri, 1 Nov 2024 16:18:48 UTC No. 16458345
>>16458259
If you're saying f'' is differentiable at P0, then that's the same as saying d/dx(f'') = f''' = d^2/dx^2(f') exists for some neighborhood around P0.
Anonymous at Fri, 1 Nov 2024 16:19:56 UTC No. 16458347
>>16458294
if it's differentiable, that means >>16458345 is true for all partials
Anonymous at Fri, 1 Nov 2024 16:39:46 UTC No. 16458367
>>16458345
No? If a function f''(x,y) is differentiable at a point, that means all third order partial derivatives of the function exist at that point, not in a neighborhood of it. You can't claim there are third order partial derivative functions defined on that neighborhood.
Anonymous at Fri, 1 Nov 2024 18:19:17 UTC No. 16458588
>>16458367
Yes, you are right. >>16458259 assumes that too, so that is incorrect. Differentiability does not imply neighborhood differentiability. Still is differentiable at the point though, which is the main point of >>16458259
Anonymous at Fri, 1 Nov 2024 19:04:21 UTC No. 16458702
>>16458588
>>16458259
assumes that the first order derivatives are twice differentiable at P0; twice differentiability means differentiability on some neighborhood of that point. Meaning partial derivative functions are defined on that neighborhood.
Anonymous at Fri, 1 Nov 2024 19:14:53 UTC No. 16458721
>>16458702
Haven't heard of that or a proof, but it doesn't seem wrong. If true, I'd believe it. Nice to know
Anonymous at Fri, 1 Nov 2024 19:28:35 UTC No. 16458737
>>16458721
That was just the definition of n times differentiability where I studied: a function is n times differentiable at a point if it is n-1 times differentiable in a neighborhood around that point and the (n-1)th order partial derivative functions are differentiable at that point.
Anonymous at Fri, 1 Nov 2024 20:18:59 UTC No. 16458800
>>16458737
Saying f is differentiable at point c is not the same as talking about a function differentiable over its domain. Differentiability does not imply neighborhood differentiability. If the existence of the 2nd derivative at point c implies a 1st derivative on a neighborhood, I wouldn't know that. But it isn't always true if you only have that the 1st derivative exists at c. A 1st derivative at c doesn't even imply continuity in a neighborhood of c, so I have my doubts if a higher derivative statement is true.
Anonymous at Fri, 1 Nov 2024 20:30:14 UTC No. 16458816
>>16458800
The domain can be a bunch of separate points. Scratch that first line. Continuously differentiable doesn't work either.
Guessing that if that's how the book is defining differentiability, the author is maybe saying he's just sticking to simple functions where it's also true in a neighborhood. Saw one author define it as basically a smooth function because its special cases were a hindrance to the overall gist of what he was teaching. Idk
Anonymous at Fri, 1 Nov 2024 21:03:14 UTC No. 16458862
Are there any rings that aren't a subgroup of a group?
Anonymous at Fri, 1 Nov 2024 21:11:30 UTC No. 16458866
Just got an A on my discreet math midterm. Feeling cute. Might bust fat ropes to femboys in lingerie later. Thinking I want to build a computer that can do basic math so I have something cool on my resume. Gonna have to learn how computers actually work though.
Anonymous at Fri, 1 Nov 2024 21:28:30 UTC No. 16458874
>>16458862
Every ring is an abelian group under addition, by definition.
Anonymous at Fri, 1 Nov 2024 22:19:58 UTC No. 16458943
>>16458866
You mean "discrete" not "discreet"
Anonymous at Sat, 2 Nov 2024 01:32:54 UTC No. 16459142
>>16458737
Could be a case of a -> b, but b+c -> a, where differentiable at P0 implies the partials at P0, while the partials at P0 and a neighborhood imply differentiable at P0 (without the neighborhood part, it isn't always true).
Anonymous at Sun, 3 Nov 2024 15:59:21 UTC No. 16461040
I have a projective group, say [math]PSL_2(\mathbb{C})[/math]. In the context of representation theory, do I always have to consider its projective representations in [math]\mathrm{PGL}(V)[/math] or are there regular representations [math]\mathrm{GL}(V)[/math] as well?
Anonymous at Sun, 3 Nov 2024 16:22:08 UTC No. 16461075
>>16461040
Consider them in what context? It depends on what you're doing.
Anonymous at Sun, 3 Nov 2024 17:09:36 UTC No. 16461114
>>16461075
What I’m asking is are there faithful representations of PSL(2,C) in GL(n,C) or do those only exist in PGL(n,C)? The reason I’m asking this is because PSL(2,C) is isomorphic to the Lorentz group of special relativity. Projective representations give rise to fermions. I’m trying to understand whether the need for projective representations is intrinsic to PSL(2,C) or is it a completely separate requirement due to quantum mechanics, where we work with a projective Hilbert space.
Anonymous at Sun, 3 Nov 2024 17:40:37 UTC No. 16461149
>>16461114
I believe you are mistaken: a "projective representation" of SL(2,C) is *not* the same thing as a representation of PSL(2,C). Rather, projective representations of a Lie group G generally correspond to representations of the universal covering group of G.
Note SL(2,C) is itself the universal covering group (in this case a double cover) of the identity component SO^+ (3,1) of the Lorentz group. See https://en.wikipedia.org/wiki/Repre
Anonymous at Sun, 3 Nov 2024 17:49:14 UTC No. 16461160
>>16461149
>a "projective representation" of SL(2,C) is *not* the same thing as a representation of PSL(2,C)
A projective representation of G is any group homomorphism from G to PGL(V)=GL(V)/F^*, where F is the field V is defined over. I am talking about projective and regular representations of PSL(2,C) itself, ie group homomorphisms PSL(2,C)->PGL_n(C) and PSL(2,C)->GL_n(C) respectively.
>Rather, projective representations of a Lie group G generally correspond to representations of the universal covering group of G.
>Note SL(2,C) is itself the universal covering group (in this case a double cover)
I am perfectly aware that SL(2,C) is the double cover of PSL(2,C). And I'm also aware of this correspondence. What I'm asking is, are representations of PSL(2,C) on GL(n,C) faithful or not? Why does one need to consider projective representations when a group is not simply connected?
Anonymous at Sun, 3 Nov 2024 18:01:17 UTC No. 16461169
>>16461149
Sorry ignore what I said here >>16461149 ,
I was experiencing temporary retardation
>Why does one need to consider projective representations when a group is not simply connected?
As I understand it, this has to do with physics: the symmetry group G (in this case the Lorentz group) of a quantum-mechanical system S must act on the Hilbert space of states H of S, but since two Hilbert space vectors differing only by a nonzero scalar multiple are considered the same physical state, we therefore consider projective representations of G on H, instead of only "true" representations.
>are representations of PSL(2,C) on GL(n,C) faithful or not?
I assume you mean, do there exist any nontrivial (true, not projective) faithful representations (smooth, complex, finite-dimensional) of PSL(2,C)? Tbh idk
Anonymous at Sun, 3 Nov 2024 18:04:11 UTC No. 16461171
>>16461169
Meant to reply to >>16461160
Anonymous at Sun, 3 Nov 2024 18:06:53 UTC No. 16461173
>>16461169
>I assume you mean, do there exist any nontrivial (true, not projective) faithful representations (smooth, complex, finite-dimensional) of PSL(2,C)? Tbh idk
Sorry actually yes, there absolutely exists a faithful (and smooth, complex or real, finite-dimensional) representation of PSL(2,C), which arises from viewing PSL(2,C) isomorphically as SO^+ (3,1) which clearly has a faithful representation on R^3 (which extends to C^3 in an obvious way)
Anonymous at Sun, 3 Nov 2024 18:11:34 UTC No. 16461174
>>16461169
>this has to do with physics
Ok, so this was my question. Thank you. I didn't know if the fact that we consider a projective Hilbert space as opposed to a regular one was just a postulate to deal with probability interpretations or something intrinsic to the group structure of the Lorentz group.
>>16461173
>which clearly has a faithful representation on R^3 (which extends to C^3 in an obvious way)
4-vectors and bispinors?
Anonymous at Sun, 3 Nov 2024 18:22:52 UTC No. 16461186
>>16461174
Oops I meant SO^+(3,1) clearly has a faithful representation on R^4 (or R^{3|1} rather), not on R^3.
This is since SO^+(3,1) is already a subgroup of GL(3,1;R).
In this context, R^4 should be viewed as a space of 4-vectors, not bispinors. (Note the spinor representations are only projective, not "true", representations of PSL(2,C) ≅ SO^+(3,1) , unlike the 4-vector representation.)
Anonymous at Sun, 3 Nov 2024 19:03:30 UTC No. 16461254
According to my book, a point on [math] S [/math] defined at [math] p(t) = \big( \ u^1(t), u^2(t) \ \big) [/math] with an inner product defined with [math] g_{ij} [/math] has a "tangent vector" [math] w_p = \sum \tfrac{du^i}{dt} \tfrac{\partial}{\partial u^i} [/math] so that
[math] \Vert w \Vert^2_p = \sum_{ij} \tfrac{du^i}{dt} \tfrac{du^j}{dt} \left( \tfrac{\partial}{\partial u^i} \cdot \tfrac{\partial}{\partial u^j} \right) = \sum_{ij} \tfrac{du^i}{dt} \tfrac{du^j}{dt} \ g_{ij} [/math].
So for example, the inner product with [math] g_{11} = 1,\ g_{12} = 0 = g_{21},\ g_{22} = e^{2u^1} [/math] is the hyperbolic plane.
But a "tangent vector" is supposed to be for a function [math] f [/math] that is differentiable at [math] p [/math], so the directional derivative is [math] \tfrac{d}{dt}f(p(t)) = \sum \tfrac{du^i}{dt} \tfrac{\partial f}{\partial u^i} [/math], and the magnitude-square of this is
[math] \sum_{ij} \tfrac{du^i}{dt} \tfrac{du^j}{dt} \left( \tfrac{\partial f}{\partial u^i} \cdot \tfrac{\partial f}{\partial u^j} \right) [/math].
What does it even mean to say something like [math] g_{11} = 1,\ g_{12} = 0 = g_{21},\ g_{22} = e^{2u^1} [/math]?
[math] \left( \tfrac{\partial f}{\partial u^i} \cdot \tfrac{\partial f}{\partial u^j} \right) [/math] makes sense in that you're dotting two vectors together, but what the hell does [math] g_{ij} = \left( \tfrac{\partial}{\partial u^i} \cdot \tfrac{\partial}{\partial u^j} \right) [/math] mean?
Anonymous at Sun, 3 Nov 2024 19:06:07 UTC No. 16461258
>>16461254
and if it's an operator on f, how can you even assure it's always equal to 1 for every function f?
Anonymous at Sun, 3 Nov 2024 19:15:32 UTC No. 16461267
>>16461186
>Note the spinor representations are only projective, not "true", representations of PSL(2,C) ≅ SO^+(3,1) , unlike the 4-vector representation
You had already said this and I already answered. You sound like you’re high on meth, no offence.
Anonymous at Sun, 3 Nov 2024 21:17:57 UTC No. 16461403
>>16461267
To spell it out for you:
I said the (proper orthochronous) Lorentz group, PSL(2,C) or SO^+(3,1), has a faithful representation on R^{3|1} .
You asked if this is a bispinor representation.
I said no, because indeed, the spinor representation is not a representation of PSL(2,C) , only a projective representation.
If you had been following along, you would have already known this, and you wouldn't have needed to ask your redundant question.
So, why weren't you following along?
Anonymous at Sun, 3 Nov 2024 21:20:13 UTC No. 16461407
>>16461254
>>16461258
You should read up on the techniques of differentiable/smooth manifolds and especially tensor calculus. This should answer everything you're wondering about
🗑️ Anonymous at Sun, 3 Nov 2024 21:53:26 UTC No. 16461443
I saw something on reddit saying the binary digit expansion of any prime integer has to have all 1's and no 0's; is this actually true? If yes how can we show this?
Anonymous at Sun, 3 Nov 2024 23:27:06 UTC No. 16461532
>>16461254
I think you're running into a common confusion between a metric [math]g[/math] and its expression in coordinates. The idea is essentially a generalization of how a matrix can be used to represent a linear transformation on a vector space, but the transformation will have a different matrix representation depending on your choice of basis.
[math]g[/math] is essentially a vastly generalized version of the usual dot product (specifically it's a 2-tensor). Here, you're using dot product notation, but it is not the usual dot product. It may be clearer to write it as [math]g(X,Y)[/math] to see explicitly that it's not the same thing. Now you can represent the usual dot product using a matrix:
[eqn]a\cdot b=a^{T}Ib=\begin{pmatrix}
a_{1} & a_{2}\\
\end{pmatrix}\begin{bmatrix}
1 & 0\\
0 & 1\\
\end{bmatrix}\begin{pmatrix}
b_{1}\\
b_{2}\\
\end{pmatrix}=a_{1}b_{1}+a_{2}b_{2}
[/eqn]
Similarly, you can represent the metric [math]g[/math] using a matrix, but the entries of that matrix depend on how you've chosen your coordinates. Once you've chosen coordinates [math]\{u^{1},u^{2}\}[/math], you get a corresponding set of basis vectors [math]\{\frac{\partial}{\partial u^{1}},\frac{\partial}{\partial u^{2}}\}[/math], and you can plug them into your general metric function to figure out what the entries of the corresponding matrix have to be. In your case, what you get is
[eqn]g\left(\frac{\partial}{\partial u^{i}},\frac{\partial}{\partial u^{j}}\right)=\left(\frac{\partial}{\partial u^{i}}\right)^{T}\begin{bmatrix}
g_{11} & g_{12}\\
g_{21} & g_{22}\\
\end{bmatrix}\left(\frac{\partial}{\partial u^{j}}\right)=g_{ij},[/eqn]
where [math]\frac{\partial}{\partial u^{i}}[/math] is represented by the i-th standard basis column vector in these coordinates. So the relation [math]g\left(\frac{\partial}{\partial u^{i}},\frac{\partial}{\partial u^{j}}\right)=g_{ij}[/math] is exactly what a statement like [math]g_{11}=1,\ g_{12}=g_{21}=0,\ g_{22}=e^{2u^{1}}[/math] is recording: the values of the metric on the coordinate basis vectors.
Anonymous at Sun, 3 Nov 2024 23:28:26 UTC No. 16461537
>>16461532
Typo, the part that didn't render should be
[eqn]g\left(\frac{\partial}{\partial u^{i}},\frac{\partial}{\partial u^{j}}\right)=\left(\frac{\partial}{\partial u^{i}}\right)^{T}(g_{mn})\left(\frac{\partial}{\partial u^{j}}\right)=g_{ij}[/eqn]
Anonymous at Mon, 4 Nov 2024 06:58:51 UTC No. 16461860
I need some wrinkled brain energy.
Wolfram says this integral is ~0.394294 but I can't verify it.
integrate (tanh x)^2*exp(-x^2/2)/sqrt(2*pi) from -inf to inf
but I get 1 if I let u = (tanh x)^2 and 1 - some nasty laplacian if I let u = exp(-x^2/2)/sqrt(2*pi)
For some context, X is a standard normal random variable, and I'm calculating E[tanh(X)^2].
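For a quick numerical sanity check (a sketch; NumPy/SciPy assumed), comparing quadrature with a Monte Carlo estimate of [math]E[\tanh(X)^2][/math]:
[code]
import numpy as np
from scipy.integrate import quad

# Integrand: tanh(x)^2 times the standard normal density.
integrand = lambda x: np.tanh(x)**2 * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

value, err = quad(integrand, -np.inf, np.inf)
print(value, err)          # should land near Wolfram's 0.394294

# Cross-check by Monte Carlo with X ~ N(0,1).
rng = np.random.default_rng(0)
print(np.mean(np.tanh(rng.standard_normal(1_000_000))**2))
[/code]
If both agree with Wolfram, the 0.394294 is almost certainly right and the discrepancy is in the by-parts attempts, not the number.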
Anonymous at Mon, 4 Nov 2024 07:10:22 UTC No. 16461870
>>16432396
Hey math tards, why do you keep saying shit like "blackholes destroy information", WTF kind of negro bullshit is that? You do know the universe isnt made of numbers right? Just because you stare at numbers all day doesnt make that shit real. Yall literally need to go touch some grass.
Anonymous at Mon, 4 Nov 2024 11:17:53 UTC No. 16461967
>>16461870
wow bro you really BTFOd that strawman, he will never recover from that one
Anonymous at Mon, 4 Nov 2024 13:57:11 UTC No. 16462096
What's a good way to include implicit 3d-plots in latex? Nothing too fucky, just polynomial equations. For 2d stuff I simply use tikz. Or should I use something like sage to create svg plots instead? God I hate plotting so much it's unreal.
Anonymous at Mon, 4 Nov 2024 14:10:30 UTC No. 16462109
math retard here, do i need to do the algebra 2 and trig modules on khan academy if i study precalc using stewart's book? i just need the stat, probability and high school geometry modules right?
Anonymous at Mon, 4 Nov 2024 17:39:09 UTC No. 16462391
>>16462096
Just don't. Use python like everyone else.
Anonymous at Mon, 4 Nov 2024 17:48:34 UTC No. 16462414
>>16462391
What do you mean? Like python -> vector graphics -> figure in latex?
Anonymous at Mon, 4 Nov 2024 17:53:03 UTC No. 16462418
>>16462414
Yes.
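For instance, one possible pipeline (only a sketch: the surface, grid size, and the scikit-image dependency are all just illustrative choices):
[code]
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure  # marching cubes for implicit surfaces

# Example implicit polynomial surface F(x, y, z) = 0 (a torus: R = 2, r = 1).
def F(x, y, z):
    return (x**2 + y**2 + z**2 + 3)**2 - 16 * (x**2 + y**2)

n = 120
lin = np.linspace(-3, 3, n)
X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
vol = F(X, Y, Z)

# Extract the zero level set as a triangle mesh.
verts, faces, _, _ = measure.marching_cubes(vol, level=0.0,
                                            spacing=(lin[1] - lin[0],) * 3)
verts += lin[0]  # shift from grid-index coordinates back to (x, y, z)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2],
                triangles=faces, linewidth=0.0)
fig.savefig("surface.pdf")  # then \includegraphics{surface.pdf} in LaTeX
[/code]
PDF keeps the figure vector for LaTeX; for very dense meshes a high-dpi PNG is usually kinder to the compiler.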
Anonymous at Mon, 4 Nov 2024 18:47:08 UTC No. 16462492
Can any math anons help me understand this definition?
Specifically, I’m not sure what is meant in the definition of the ∞-moment of Q, when it says sup | supp Q | :
As I understand it, supp Q is a particular subset of R, and | supp Q | is its measure (using the measure "associated to" the distribution Q, if I'm not mistaken?), which is just a constant number. But then, what is meant by sup | supp Q | ? What is the supremum being taken over?
Anonymous at Mon, 4 Nov 2024 18:52:42 UTC No. 16462496
>>16461532
i see, thx for the comment
Anonymous at Mon, 4 Nov 2024 18:56:49 UTC No. 16462502
>>16462492
>What is the supremum being taken over?
The support. It's just retarded notation. Supp means support and sup means supremum.
Anonymous at Mon, 4 Nov 2024 19:31:20 UTC No. 16462551
>>16462502
I see, so the supremum of the absolute values of points in the support. Thanks anon
Anonymous at Mon, 4 Nov 2024 20:37:13 UTC No. 16462645
Let [math] \psi : [0,\infty) \rightarrow [0,\infty) [/math] be a continuous function such that [math] \lim_{u\to\infty} \psi(u) = \infty [/math].
Then does there necessarily exist a constant [math] u_0 \geq 0 [/math] such that for all [math] u\geq u_0 [/math] we have [math] \psi(u) \geq \psi(u_0) [/math]? If yes, would anyone have an idea of how this can be proved?
Anonymous at Mon, 4 Nov 2024 20:49:51 UTC No. 16462647
>>16462395
consider the equation
[math]z=x^y[/math]
there's a function to get x from z and y
and there's a different function to get y from z and x. these are called left and right inverse functions:
roots are left inverses of exponents
[math]x=\sqrt[y]z[/math]
logs are right inverses of exponents
[math]y=\log_x{z}[/math]
Anonymous at Mon, 4 Nov 2024 20:58:20 UTC No. 16462656
>>16462645
If [math]\psi(u)[/math] goes to infinity, then for any positive number [math]\psi(u_{0})[/math], there exists some [math]U>0[/math] such that [math]\psi(u)>\psi(u_{0})[/math] for all [math]u>U[/math]. Take the infimum over all such [math]U[/math].
Anonymous at Mon, 4 Nov 2024 22:01:16 UTC No. 16462746
>>16462656
Actually someone showed me a counterexample to >>16462645
which is the function [math] \psi(x) = x/2 + \sin(x) [/math].
Anonymous at Mon, 4 Nov 2024 22:05:49 UTC No. 16462749
>>16462746
That function isn't a counterexample to your claim. I think you've misstated what you were trying to say.
Anonymous at Mon, 4 Nov 2024 22:21:20 UTC No. 16462767
>>16462749
I don't follow what you're saying, how is >>16462746 not a counterexample to >>16462645 ?
Anonymous at Mon, 4 Nov 2024 22:23:41 UTC No. 16462769
>>16462767
he/she doesn't know what they're saying
Anonymous at Mon, 4 Nov 2024 22:27:44 UTC No. 16462774
>>16462767
Why wouldn't any value of [math]\frac{2\pi}{3} + 2\pi{n}[/math] work for [math]u_0[/math]?
Anonymous at Mon, 4 Nov 2024 22:28:44 UTC No. 16462775
>>16462774
*[math]-\frac{2\pi}{3}+2\pi n[/math], rather
the local minima of the function are the points to take
Anonymous at Mon, 4 Nov 2024 22:29:43 UTC No. 16462776
>>16462767
Take [math]u_0[/math] equal to any minimum of the sine function for example [math]u_0 = \frac{3 \pi}{2} [/math] then [math]\psi(u) > \psi(u_0)[/math] for all [math]u > u_0[/math].
🗑️ Anonymous at Mon, 4 Nov 2024 22:32:05 UTC No. 16462778
>>16462776
Nigga what are you talking about. Try graphing the function f(x) = x/2 + sin(x) and see what it looks like
Anonymous at Mon, 4 Nov 2024 22:37:22 UTC No. 16462782
>>16462767
[math]\psi(0)<\psi(u)[/math] for all [math]u>0[/math]. I think what you meant to ask is whether there necessarily exists some [math]u_{0}[/math] such that for all [math]a\geq b>u_{0}[/math], we get [math]\psi(a)\geq\psi(b)[/math], i.e. such that [math]\psi[/math] is increasing on [math](u_{0},\infty)[/math]. The answer to that question is no by your counterexample. Check your wording carefully, they are not the same question.
Anonymous at Mon, 4 Nov 2024 22:54:16 UTC No. 16462803
>>16462782
Sorry I see what you're saying now. I was being dumb, my bad
Anonymous at Tue, 5 Nov 2024 14:18:05 UTC No. 16463550
I am relearning math from scratch, almost done with Algebra 2 while working on basic proof writing and set theory on the side using Hammack's book.
Is Lang/Murrow Geometry overkill for Geometry fundamentals?
Or could I benefit from working through this before I move on to Trigonometry, Analysis and LA?
Pic related is the toc for the book.
Anonymous at Tue, 5 Nov 2024 19:09:18 UTC No. 16463959
>>16463550
Try the foundational approach to geometry (Synthetic [1], analytic [2]). If it doesn't suit your taste, the first five chapters of Lang are good enough before trigonometry and single variable calculus, and you can read the rest of Lang on the side.
References [1] Euclidean Plane and its Relatives; A Minimalist Introduction -
Anton Petrunin https://arxiv.org/abs/1302.1630
[2] Course of Analytical Geometry -
Ruslan Sharipov https://arxiv.org/abs/1111.6521
Anonymous at Wed, 6 Nov 2024 09:49:37 UTC No. 16464767
Say you have a bunch of functions that each send some domain set to a codomain set. You take the union of these codomain sets and then assume the resulting set is complete, in the sense that this union is a finite subcovering. Is there a way of showing that if you remove some random codomain set, then given a domain set whose codomain is not in the finite subcovering, there exists a function that would make the finite subcovering complete again when applied to this domain set, i.e. there exists a codomain set that makes it complete given the domain set? I am not talking about completeness in the sense of open/closed sets here; complete in the sense that some maximal amount of information is preserved in the union of these sets.
Anonymous at Wed, 6 Nov 2024 10:24:22 UTC No. 16464796
>>16464767
I guess what I am asking is akin to: if you take the Hausdorff measure between the complete union and the incomplete union--formed by removing a random codomain--is there a way of showing that this gap can be filled by the codomain of a function, provided the domain in question lies in some neighbourhood of every other domain that contributed towards the complete union?
Anonymous at Fri, 8 Nov 2024 12:33:40 UTC No. 16467060
>learn math
>forget it if I dont use it
what the fuck is the point? I feel scammed.
Anonymous at Fri, 8 Nov 2024 15:42:06 UTC No. 16467246
>>16467043
I would read "Understanding Analysis" by Abbott.
Anonymous at Fri, 8 Nov 2024 16:07:13 UTC No. 16467268
>>16467246
Is that book easy?
Anonymous at Fri, 8 Nov 2024 16:53:54 UTC No. 16467305
>>16467268
Easy enough. You should try all the exercises, in case that wasn't obvious. They're great.
Anonymous at Fri, 8 Nov 2024 16:55:31 UTC No. 16467307
>>16467305
Can it really be done in 2 weeks?
Anonymous at Fri, 8 Nov 2024 19:06:37 UTC No. 16467444
>>16467307
I don't know. If it doesn't work out you can rest easy that anything else probably also wouldn't have worked.
Anonymous at Fri, 8 Nov 2024 21:39:58 UTC No. 16467599
>>16467043
This is a short summary (without exercises) of those topics
>Number of Pages 147
https://old.maa.org/press/maa-revie
This is a compilation of ~5000 problems
https://archive.org/details/Demidov
Anonymous at Fri, 8 Nov 2024 21:46:07 UTC No. 16467606
>>16467599
I think the MAA guide is doable in two weeks. But I don't know which set of problems can help you in such a short time. Maybe Schaum’s Outline of Theory and Problems of Real Variables/Advanced Calculus? I was wrong, there are fewer than 4000 problems in that edition, and only 1781 concern the required topics.
Anonymous at Fri, 8 Nov 2024 22:27:59 UTC No. 16467640
Can someone help me with this retarded problem?
Assuming you have a 50% win rate strategy: if with a 10 pip stop loss and 10 pip take profit you are 22% more likely to lose, then with a 10 pip take profit and a 50 pip stop loss, what percent more likely to lose are you?
Anonymous at Sat, 9 Nov 2024 04:35:46 UTC No. 16467971
>>16463550
Lang and Murrow is very gentle and gradual. Not overkill, and easier than George Birkhoff, which is also a good choice.
Anonymous at Sat, 9 Nov 2024 05:17:18 UTC No. 16468010
I'm trying to learn Linear Logic but I have no extensive experience with anything more obscure than first-order classical logic, and I'm completely unfamiliar with reading sequent calculus.
The main thing I don't understand is exponentials. I know that ! restores the structural rules to a term "on the left" and that ? restores them to a term "on the right", and that they are duals of each other. What I don't get is
>what does a ? on the left do, besides get negated and turned into a !
>what does a ! on the right do, besides get negated and turned into a ?
>why not just have one exponential that is applied everywhere and that is its own dual?
Anonymous at Sat, 9 Nov 2024 05:39:21 UTC No. 16468018
>>16468010
How far removed are you from this?
Anonymous at Sat, 9 Nov 2024 06:29:36 UTC No. 16468049
Is anyone looking for a math job? This is my first recruiting job and my bf is smart and frequents 4chan
https://app.outlier.ai/expert/oppor
Anonymous at Sat, 9 Nov 2024 07:39:36 UTC No. 16468072
>>16468049
No sweetie, at this time /sci/ is a math board: 300k starting or ~$150 USD per hour and perks.
Anonymous at Sat, 9 Nov 2024 17:55:57 UTC No. 16468454
good video resources for calculus (not sure if 1 or 2, i'm not anglo)? solving problems in particular; i think i understand the theory but i wouldn't mind working on that too
Anonymous at Sat, 9 Nov 2024 21:07:26 UTC No. 16468719
>>16468018
Maybe not too far? I can mostly follow individual paragraphs and formulae, but I don't know what a lattice is or what a continuous function means outside the context of analysis.
Anonymous at Sat, 9 Nov 2024 21:52:58 UTC No. 16468774
>>16468049
I already work there
Anonymous at Sun, 10 Nov 2024 07:47:24 UTC No. 16469395
How good is Davenport's Number Theory for a beginning graduate student who has some background in number theory (Apostol)?
Anonymous at Sun, 10 Nov 2024 11:46:49 UTC No. 16469497
I can't for the life of me prove part (b) that
[eqn]
F^N_{n,m} = \sup_{ \frac{1}{n}\leq|h| \leq \frac{1}{m}} \frac{J_N(x+h) - J_N(x)}{h} = \sup_{ \frac{1}{n}\leq |h| \leq \frac{1}{m}} \frac{\sum\limits_{k =1 }^N\alpha _k j_k(x+h) - \sum\limits_{k =1 }^N\alpha _k j_k(x)}{h}
[/eqn]
is measurable. The [math]\sup[/math] is taken over an uncountable set, and so we can't apply the standard measurability rules. I thought about taking the [math]\sup[/math] over the rationals intersected with that set, but the function [math] J [/math] (where [math]J[/math] is the jump function of an increasing bounded function [math] F [/math] defined on a closed interval [math] [a,b] [/math]) is itself increasing, and so it can have countably many discontinuities, which I think interferes with the [math]\sup [/math]. Or does it? What if [math] F [/math] is discontinuous at every point of [math]\mathbb{Q} \cap [a,b] [/math]? Then if the [math] \sup [/math] is attained at some irrational number, no point of [math] \mathbb{Q} \cap [a,b] [/math] ever reaches it. I think? Maybe we can leverage the fact that [math] J_F(x) [/math] is an increasing function and the bounds of our set [math] \left[ \frac{1}{n}, \frac{1}{m} \right][/math] are rational?
Any ideas? I have never posted on math stack exchange and I'm considering it now, but I fear being exposed as a total brainlet.
I fucking hate [math] \mathbb{R} [/math]
Anonymous at Sun, 10 Nov 2024 11:47:58 UTC No. 16469499
>>16469497
Here is the defining of [math] J(x) [/math]
Anonymous at Sun, 10 Nov 2024 12:11:16 UTC No. 16469520
Is there a simple way to derive a mean from cumulative percentages? I must admit I'm a bit of a retard, so all the explanations that involve probability spaces and integrals will go through one ear and out the other.
If any anon actually replies, I'll give an example of what I mean here:
Say a group of students can get one of 5 scores on a test, where 5 is the highest that the brightest kids can get, and 1 is the lowest that everyone gets. The students get the following cumulative percentages:
100% of students get a 1 or higher
70% get a 2 or higher
40% get a 3 or higher
10% get a 4 or higher
5% get a 5
The question would then be what is the average score of the group. An approximate method to derive it is fine as well, I'm just too stupid for any advanced shit. Any solutions using matrix algebra would also be good though.
Anonymous at Sun, 10 Nov 2024 12:19:35 UTC No. 16469528
>>16469520
100% for a 1 or higher
70% got a 2 or higher
therefore 100-70 = 30% got a 1 or higher but NOT a 2 or higher i.e. just got a 1
Okay so 30% got 1
Then 70% got a 2 or higher and 40% got a 3 or higher, therefore 70 - 40 = 30% got a 2 or higher but NOT a 3 or higher i.e. just 2.
Then 40% got a 3 or higher, and 10% got a 4 or higher, therefore 40-10 = 30% got a 3 or higher but NOT a 4 or higher
Out of the remaining 10%, we see that 5% got a 5, so logically the remaining 5% must have gotten a 4.
1 - 30%
2 - 30%
3 - 30%
4 - 5%
5 - 5%
Anonymous at Sun, 10 Nov 2024 12:20:43 UTC No. 16469530
Is there a way to represent some function F that takes a set of coordinates in R^2 and outputs the radius of the minimum bounding circle of those points as the problem of finding a minimum of a multivariable polynomial?
Anonymous at Sun, 10 Nov 2024 12:49:57 UTC No. 16469549
>>16469528
Can you get a mean from that?
Anonymous at Sun, 10 Nov 2024 12:54:35 UTC No. 16469552
>>16469549
Yes, then just do a weighted sum
0.3*1 + 0.3*2 + 0.3*3 + 0.05*4 + 0.05*5
= 2.25
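And in case you ever have a bigger table to grind through, a quick Python sketch of the same differencing-then-weighting (the dict literal is just your example data):

cumulative = {1: 100, 2: 70, 3: 40, 4: 10, 5: 5}   # percent scoring >= key

scores = sorted(cumulative)
share = {}
for s, nxt in zip(scores, scores[1:]):
    share[s] = cumulative[s] - cumulative[nxt]      # percent scoring exactly s
share[scores[-1]] = cumulative[scores[-1]]          # top score: nothing above it

mean = sum(s * p for s, p in share.items()) / 100
print(share)   # {1: 30, 2: 30, 3: 30, 4: 5, 5: 5}
print(mean)    # 2.25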
Anonymous at Sun, 10 Nov 2024 12:58:53 UTC No. 16469554
>>16469552
Thanks.
Anonymous at Sun, 10 Nov 2024 14:27:46 UTC No. 16469610
>>16469530
There is a nonconstructive theorem I've seen before that states that every linear programming problem is equivalent to minimizing some polynomial. Identifying which polynomial is the hard part.
Anonymous at Sun, 10 Nov 2024 19:31:17 UTC No. 16469871
>>16463959
Thanks anon I will take a look
>>16467971
I mostly meant overkill in terms of the amount of geometry, since most Algebra books seem to include some Geometry basics anyway, just much shorter. So I was wondering whether it's even worth working through a full specific book on it.
When it comes to interest alone I would just do it, but unfortunately time is limited and there is so much to do.
Anonymous at Sun, 10 Nov 2024 19:37:47 UTC No. 16469874
What are generating functions? I saw them mentioned a few times now as a "useful trick/tool" that makes life easier.
What can they be used for, and is it worth going out of my way to learn them, or are they something you learn anyway when working on the topics they are applied to?
Anonymous at Mon, 11 Nov 2024 04:20:20 UTC No. 16470408
>>16469395
>How good is X book
zbmath and MAA book reviews will sometimes help
https://old.maa.org/press/maa-revie
Anonymous at Mon, 11 Nov 2024 04:30:25 UTC No. 16470418
>>16469874
That's kind of a big topic you won't get a good answer to in a single post. Maybe a good introduction is watching something like: https://youtu.be/4dyytPboqvE
Anonymous at Mon, 11 Nov 2024 06:21:02 UTC No. 16470474
>>16469497
You're on the right track.
If I'm reading it right, F, hence J_F, is nondecreasing (?).
Then, it has countably many discontinuities, implying it is continuous on a dense subset of [a,b].
This subset will have a further countable dense subset, over which you can take your sup.
🗑️ Anonymous at Mon, 11 Nov 2024 07:18:26 UTC No. 16470503
>>16469530
If you plot all your 2-point coordinates, you can enclose all of em in a rectangle with minimum length and minimum height, where the min and max x and y values on your list are on the sides of the rectangle. This means at most you'll have 4 lines to check for maximum diameter, whose endpoints are the following: highest left and lowest right points, lowest left and highest right point, leftmost top and rightmost bottom, and rightmost top and leftmost bottom.
To find the maximum of two numbers, you can do Max(a,b) = (a+b)/2 + sqrt( (a-b)^2 ) / 2. To find the maximum of an unordered list of numbers, apply this maximum function to all of em, so like Max(a, Max(b, Max(c...))). Finding the minimum is similar. Now we got a minimum and maximum function for a list of numbers.
Suppose you want to compare (x1, y1) and (x2, y2) to find the lowest right point. So you want to see which has the largest x value, and you want to keep the y value of the largest x-value, but if both x's are the same you want to keep the lowest y value. This is the same as saying the y value is equal to
>x1 = x2 -> Min(y1, y2) * 1 + Keep_the_Y_of_max_X * 0
>x1 != x2 -> Min(y1, y2) * 0 + Keep_the_Y_of_max_X * 1
or
>y = Min(y1, y2) * A + Keep_the_Y_of_max_X * B
I'm gonna use something dumb and say that (x-x)/(x-x) = 1.
Keep_the_Y_of_max_X can be found to be y1 * [ x1 - Min(x1,x2) ] / [x1-x2] + y2 * [ x2 - Min(x2,x1) ] / [x2-x1] or anything equivalent.
A is [ Max(x1,x2) - Max(x1,x2) ] / [ Max(x1,x2) - Min(x1,x2) ], and B = 1-A
You can then do something similar to find the conjugate point, the highest left point. You can then find the distance between these two points. You can then repeat this for the other 3 sets of points and find the lengths of those lines. Then you can find the Max of these 4 distances to get the max diameter.
Anonymous at Mon, 11 Nov 2024 12:28:46 UTC No. 16470676
>>16469874
Read the first pages of generatingfunctionology for more details
They're simply a nice way of packaging sequences together
Anonymous at Mon, 11 Nov 2024 18:45:20 UTC No. 16471125
>>16469530
To find the maximum of two numbers, you can do Max(a,b) = (a+b)/2 + sqrt( (a-b)^2 ) / 2. To find the maximum of an unordered list of numbers, apply this maximum function to all of em, so like Max(a, Max(b, Max(c...))). Finding the minimum is similar, and you can subtract Max - Min to get the largest difference of X or Y values.
You can then rotate the X or Y coordinate about any point (so choose the origin), so like X' = X cos t - Y sin t, and use these values in Max - Min, now a function of the angle of rotation. To not use trig functions, maybe you can replace cos t with T that ranges from -1 to 1 and sin t = + sqrt(1^2 - T^2) and - sqrt(1^2 - T^2). Find the maximum of this function to get the maximum diameter of your circle. Finding the minimum of the negative of this function does the same thing. Maybe try out some other ways to try and force it into a polynomial
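Two caveats, then a sanity check: sqrt((a-b)^2) is just |a-b|, so strictly speaking the objective becomes piecewise polynomial rather than a single polynomial, and the largest pairwise distance (which is what the rotation trick maximizes) is only a lower bound on the diameter of the minimum bounding circle (think of an equilateral triangle). Here's the nested Max/Min part as a quick numerical check in Python; the helper names are made up:

from functools import reduce
from math import sqrt

def max2(a, b):
    # (a+b)/2 + |a-b|/2, written with sqrt((a-b)^2) as in the post
    return (a + b) / 2 + sqrt((a - b) ** 2) / 2

def min2(a, b):
    return (a + b) / 2 - sqrt((a - b) ** 2) / 2

xs = [3.0, -1.5, 7.2, 0.0]
print(reduce(max2, xs))   # 7.2
print(reduce(min2, xs))   # -1.5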
Anonymous at Mon, 11 Nov 2024 18:47:09 UTC No. 16471130
>>16471125
You don't need the sin t = - sqrt... btw, you only need to use the positive
Anonymous at Mon, 11 Nov 2024 22:29:03 UTC No. 16471331
Big project due tonight in math but I'm just not going to do it. I have no will to do anything right now. I'm so fucking mentally destroyed.
Anonymous at Tue, 12 Nov 2024 00:45:58 UTC No. 16471453
>>16471331
Slow down, take some personal time, but don't let your current opportunities go to waste. Use your personal time to focus on what really motivates you, and reach out for help.
Anonymous at Tue, 12 Nov 2024 06:23:41 UTC No. 16471708
>>16432396
I had a brain injury that damaged the left hemisphere of my brain. I don't remember how to do math. As in even basic arithmetic (minus simple counting with my hands). Like I look at these symbols here, and it might as well be hieroglyphics. I remember I was going to take multivariable calculus in 2019, I remember taking tests, using a scientific calculator/etc, but I literally don't know how to do it anymore. I don't even remember what the symbols mean (like "e" which I think was how much a number moves over; trying to remember is like trying to remember a void, like the page won't load).
Thank god there are computers to do all this shit.
Anonymous at Tue, 12 Nov 2024 09:31:42 UTC No. 16471798
Help me Anons, is there anything special in sequence 63, 65, 107, 109, 197?
Anonymous at Tue, 12 Nov 2024 10:54:30 UTC No. 16471832
>>16471798
Is it possible that there are any terms between these in the sequence that you're unaware of? OEIS doesn't turn anything up when looking up that exact permutation
Anonymous at Tue, 12 Nov 2024 11:14:47 UTC No. 16471844
>>16471834
[math]X_{n \land N}[/math] is [math]X_n[/math] until it enters the interval [math] [-K, -K - M][/math] and then becomes constant.
lowercase sage !!IaxlA1xvEP/ at Tue, 12 Nov 2024 11:24:43 UTC No. 16471850
>>16471798
Sorta:
63 and 65 are mass numbers of copper.
107 and 109 are mass numbers of silver.
197 is mass number of gold.
See also: https://en.wikipedia.org/wiki/Group
Anonymous at Tue, 12 Nov 2024 13:30:35 UTC No. 16471925
>>16447912
Kobayashi Nomizu
Anonymous at Tue, 12 Nov 2024 14:59:26 UTC No. 16471984
>>16471850
nta, thats some good work sherlock
Anonymous at Tue, 12 Nov 2024 16:52:48 UTC No. 16472075
>>16471844
Sankyu.
Anonymous at Tue, 12 Nov 2024 17:02:58 UTC No. 16472084
>>16471708
>there are computers to do all this shit
That's the punchline. Computers suck at high level mathematics. High level math requires making new definitions and inventing ideas which computers can't do
Anonymous at Tue, 12 Nov 2024 18:19:07 UTC No. 16472195
A seemingly boring equation in my calc book:
[eqn]
\sqrt{x+h} - \sqrt{x} / h
=
( \sqrt{x+h} - \sqrt{x} ) ( \sqrt{ x +h } + \sqrt{ x } ) / h ( \sqrt{ x +h } + \sqrt{ x } )
[/eqn]
However, the root function has +- outputs, meaning the fraction changes.
Anonymous at Tue, 12 Nov 2024 18:22:17 UTC No. 16472200
A seemingly boring equation in my calc book:
[eqn]
\frac{\sqrt{x+h} - \sqrt{x} }{ h }
=
\frac{ ( \sqrt{x+h} - \sqrt{x} ) ( \sqrt{ x +h } + \sqrt{ x } )} { h ( \sqrt{ x +h } + \sqrt{ x } )}
[/eqn]
However, the root function has +- outputs, meaning the fraction changes.
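For context, the point of that rearrangement (with [math]\sqrt{\cdot}[/math] taken as the usual nonnegative root) is that the difference of square roots cancels and the [math]h[/math] divides out:
[eqn]
\frac{ ( \sqrt{x+h} - \sqrt{x} ) ( \sqrt{ x +h } + \sqrt{ x } )} { h ( \sqrt{ x +h } + \sqrt{ x } )}
= \frac{ (x+h) - x }{ h ( \sqrt{ x +h } + \sqrt{ x } )}
= \frac{ 1 }{ \sqrt{ x +h } + \sqrt{ x } }
\xrightarrow{\; h \to 0 \;} \frac{1}{2\sqrt{x}},
[/eqn]
which is the derivative of [math]\sqrt{x}[/math] the book is after.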
Anonymous at Tue, 12 Nov 2024 18:32:07 UTC No. 16472210
>>16472200
I don't understand what you mean?
[math]\sqrt{x}[/math] only has strictly nonnegative outputs
Anonymous at Tue, 12 Nov 2024 18:33:22 UTC No. 16472212
>>16472210
OK I will remember that
Anonymous at Wed, 13 Nov 2024 05:42:24 UTC No. 16472998
I wanted to sim a hotdog eating contest that my DnD campaign might be doing. When I asked the college stats tutor, her brain fried. When I asked my electrical engineer father, his brain fried. Could I get someone to check my math?
The idea is that there are 4 players eating hotdogs one at a time. Each time they eat a hotdog they have to roll a 20 sided dice.
On the first hotdog if they roll a 1 they weren't able to finish it.
On the second hotdog, if they rolled a 1 or a 2 they weren't able to finish it
On the third, 1-3 = fail
and so on until all 4 have failed a roll. I'm really not sure if I'm up my own ass with the solution to this.
https://docs.google.com/spreadsheet
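Can't see the spreadsheet, so here's a quick Monte Carlo sketch in Python under one reading of your rules (each player rolls independently, and the hotdog they fail on doesn't count); tweak it if your scoring differs:

import random

def hotdogs_finished(sides=20):
    """One player: on hotdog k, a d20 roll of k or less means they fail and stop."""
    finished = 0
    k = 1
    while random.randint(1, sides) > k:
        finished += 1
        k += 1
    return finished

trials = 100_000
per_player = [hotdogs_finished() for _ in range(trials)]
print("average hotdogs per player:", sum(per_player) / trials)

# one full contest: the winner is whoever finishes the most among the 4 players
print("winning count in one contest:", max(hotdogs_finished() for _ in range(4)))

Comparing its output against your sheet should at least tell you whether your formulas are in the right ballpark.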
Anonymous at Wed, 13 Nov 2024 06:13:16 UTC No. 16473012
mathlet here
how do i efficiently calculate n iterations of [math]x=x+a(b-x)[/math]?
i noticed if a is 0.5, then it's just [math]x=\frac{x(1-0.5^n)}{1-0.5}[/math]
>you're retarded
i know
Anonymous at Wed, 13 Nov 2024 06:15:14 UTC No. 16473014
>>16473012
[math]x=\frac{x(1-0.5^n)}{1-0.5}[/math]
Anonymous at Wed, 13 Nov 2024 06:20:08 UTC No. 16473019
Anyone else struggling with college algebra? There are some parts I just don't understand without being reminded how to solve with them.
The concepts of nth degree, real zeros, and the linear factorization theorem are some I'm trying to grasp.
I feel like I should understand these eventually because college algebra is fundamental for adv math.
Anonymous at Wed, 13 Nov 2024 06:21:38 UTC No. 16473022
>>16473019
Disregard the nth degree portion, that's just the highest exponent in the equation iirc
Anonymous at Wed, 13 Nov 2024 07:22:49 UTC No. 16473085
>>16473012
>>16473014
nvm i figured it out
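For anyone else who wants it: assuming [math]a[/math] and [math]b[/math] are constants, each step is [math]x \mapsto (1-a)x + ab[/math], so the distance to [math]b[/math] shrinks geometrically and [math]n[/math] iterations collapse to
[eqn]
x_n - b = (1-a)^n (x_0 - b) \quad \Longrightarrow \quad x_n = b + (1-a)^n\,(x_0 - b),
[/eqn]
i.e. one power instead of a loop.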
Anonymous at Wed, 13 Nov 2024 08:44:14 UTC No. 16473137
>>16473019
>>16473022
Note that the maximum degree of your polynomial is 3, meaning that you can have 3 REAL roots or fewer. Because a COMPLEX number is involved as a root, this changes to having three roots EXACTLY. This is the point of the linear factorization theorem since complex numbers are involved.
Notice, then, you're one root short of what's required. That means your third root MUST be the conjugate of [math] 9+4i [/math], to get rid of stray imaginary numbers. Now the polynomial looks like this: [math] a(x+2)(x-9-4i)(x-9+4i) [/math]. The coefficient "a" can be determined by evaluating the polynomial at [math] f(1)=240 [/math]. You will have real coefficients guaranteed once you expand everything.
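If you want to sanity check the expansion (taking the given data to be the roots [math]-2[/math] and [math]9+4i[/math] with [math]f(1)=240[/math]), a quick sympy run:

from sympy import symbols, I, expand

x = symbols('x')
p = expand((x + 2)*(x - 9 - 4*I)*(x - 9 + 4*I))

print(p)             # x**3 - 16*x**2 + 61*x + 194, all coefficients real
print(p.subs(x, 1))  # 240, so a = 240/240 = 1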
Anonymous at Wed, 13 Nov 2024 10:11:32 UTC No. 16473196
Why is differential geometry considered a branch of geometry as opposed to analysis, but algebraic geometry a branch of algebra?
Anonymous at Wed, 13 Nov 2024 12:44:28 UTC No. 16473292
Anonymous at Wed, 13 Nov 2024 13:36:32 UTC No. 16473334
Any ideas on how to prove that if [math] f [/math] is of bounded variation and absolutely continuous on [math] [a,b] [/math] and
[eqn]
\int_a^b f'(x) g(x) \, dx = -\int_a^b f(x) g'(x) \, dx.
[/eqn]
holds for every absolutely continuous [math] g [/math] in [math] [a,b] [/math] with [math] g(a) = g(b) = 0 [/math], then [math] f [/math] is absolutely continuous?
I have already proved that for two absolutely continuous functions, the equality holds.
>>16470474
I think first I would have to prove there exists a countable subset of that other set, such that for every [math] x [/math] in the domain [math] x + r_n \notin \{x_i\}_{i=N_x}^\infty[/math], i.e. not in the set of discontinuities for some infinite tail of that countable subset of the continuous domain [math]\{ r_n \}_{n=1}^{\infty} [/math]
Anonymous at Wed, 13 Nov 2024 13:37:47 UTC No. 16473335
>>16473334
>>16470474
Oh, and I found a solution, though I have not gone over the top result. But I do know the second answer must be wrong (because of the countable discontinuities thing).
https://math.stackexchange.com/ques
Anonymous at Wed, 13 Nov 2024 14:15:50 UTC No. 16473370
>>16473137
I copied this down on my notes but there are some key words here that I'll need to study again so thanks.
Anonymous at Wed, 13 Nov 2024 14:17:04 UTC No. 16473371
Seeing all these symbols itt makes my head hurt, it's like foreign scripture.
Anonymous at Wed, 13 Nov 2024 14:38:12 UTC No. 16473383
>>16473335
You're right, the second answer is wrong, but it's salvageable. They (understandably) take [math]\mathbb Q[/math] as dense subset, but (as you remarked >>16469497
here) it's possible the discontinuities ``line up'' with [math]\mathbb Q[/math].
There's nothing special about the rational numbers in this aspect; and any other countable dense set works just as well.
To be somewhat more specific, you have the interval [math][a,b][/math], and a set of discontinuity points [math]D[/math] of [math]J[/math].
Since [math]D[/math] is countable, [math][a,b]\setminus D[/math] is still dense in [math][a,b][/math].
All you need is a *countable* dense subset [math]A[/math] of [math][a,b]\setminus D[/math], which will then also be a countable dense subset of [math][a,b][/math].
Here is a simple construction I just googled: https://math.stackexchange.com/a/46
Anonymous at Wed, 13 Nov 2024 14:41:57 UTC No. 16473389
>>16473334
>if [math]f[/math] is of bounded variation and absolutely continuous
>[...]
>then [math]f[/math] is absolutely continuous
>I have already proved that for two absolutely continuous functions
What do you mean?
Anonymous at Wed, 13 Nov 2024 14:42:34 UTC No. 16473390
>>16473389
Oh damn, I meant to say the first one is continuous and of bounded variation on [a,b]
Anonymous at Wed, 13 Nov 2024 14:47:09 UTC No. 16473397
>>16473390
>>16473334
I'll update on my thought process.
I have not proved this yet, but there exists something called a Lebesgue decomposition (pic rel).
If we then let [math] F = F_{AC} + F_C + F_J [/math] as in that definition, then since [math] F [/math] is continuous the jump part [math] F_J [/math] vanishes and we are left with [math] F = F_{AC} + F_C [/math], where [math]F'_C = 0 [/math] a.e.
If we plug this back into
[eqn]
\int_a^b f'(x) g(x) \, dx = -\int_a^b f(x) g'(x) \, dx.
[/eqn]
we eventually get
[eqn]
\int_{[a,b]} g' F_C = 0
[/eqn]
[math] \forall g\in AC[a,b] [/math], where [math] g(a) = g(b) = 0 [/math].
Not solved yet I don't think, but I feel like there's something here
Anonymous at Wed, 13 Nov 2024 18:12:43 UTC No. 16473631
>>16473334
>>16473390
Maybe I am missing something, but if both [math]f[/math] and [math]g[/math] are continuous and of bounded variation, this is enough for the classic integration by parts formula [math]\int_a^b f d g + \int_a^b g d f = f(b)g(b)-f(a)g(a)[/math] to hold.
See for example https://staff.fnwi.uva.nl/p.j.c.spr
This then implies your formula certainly for AC [math]g[/math] being zero at the endpoints (because AC implies bounded variation), so this shouldn't be of any use in ``upgrading'' the condition that [math]f[/math] is BV into [math]f[/math] being AC.
Anonymous at Wed, 13 Nov 2024 19:02:40 UTC No. 16473711
>>16473631
Here I am working with the Lebesgue Integral.
I have already proved that
[eqn]
\int_a^b f g' + \int_a^b f'g = f(b)g(b)-f(a)g(a)
[/eqn]
For absolutely continuous [math] f,g [/math] in [math]AC[a,b] [/math]. Indeed, this follows just from the fact [math] fg \in AC[a,b] [/math], and [math] (fg)' = f'g + fg' [/math], and the Fundamental Theorem of Lebesgue Integral Calculus.
The result I am trying to prove now is that if [math] f \in BV[a,b] \cap C[a,b] [/math], i.e. [math] f [/math] is continuous and of bounded variation on [math] [a,b] [/math], and we have that [math] \forall g \in AC[a,b] [/math] with [math] g(a) = g(b) = 0 [/math]:
[eqn]
\int_{[a,b]} f'(x)g(x)dx = -\int_{[a,b]} f(x)g'(x)dx
[/eqn]
THEN [math] f [/math] must be absolutely continuous (and not just continuous + bounded variation).
Anonymous at Wed, 13 Nov 2024 20:11:19 UTC No. 16473794
Was having trouble with calculus, this video really helped me out so I'll share it here: https://youtu.be/WMrpZzB37xM
Anonymous at Wed, 13 Nov 2024 20:43:56 UTC No. 16473823
Are linear type systems isomorphic with (full) Linear Logic, or only with (Full?) Intuitionistic Linear Logic like normal type systems?
Anonymous at Wed, 13 Nov 2024 21:08:31 UTC No. 16473847
Hey Frens,
Can someone please help?
Find an example of a sequence of functions [math] \{f_n\}_{n=1}^{\infty} [/math] that converges to zero pointwise a.e. on a set [math] E [/math] and yet [math] \{f_n\}_{n=1}^{\infty} [/math] does not converge to zero in measure.
Need to prove the example has these properties.
Anonymous at Wed, 13 Nov 2024 21:16:15 UTC No. 16473856
Let [math] f [/math] be a continuous integrable function on [math] \mathbb{R} [/math], and let [math] g [/math] be a bounded measurable function on [math] \mathbb{R} [/math]. For each [math] n \in \mathbb{N} [/math], let [math] f_n(x) = f(x + 1/n) [/math].
(a) Prove that [math] \{g \cdot f_n\}_{n=1}^{\infty} [/math] is uniformly integrable.
(b) Prove that [math] \{g \cdot f_n\}_{n=1}^{\infty} [/math] is tight.
(c) Prove that
[math]
\lim_{n \to \infty} \int_{\mathbb{R}} g \cdot f_n = \int_{\mathbb{R}} g \cdot f.
[/math]
Anonymous at Wed, 13 Nov 2024 22:32:52 UTC No. 16473947
>>16473798
Sauce?
Anonymous at Wed, 13 Nov 2024 22:50:39 UTC No. 16473979
The sun has fallen out of orbit and is on a collision course with Earth!
You have one shot to prevent this catastrophe:
You must feed the AI controlling Earth's Magic Machine as accurate a graph as possible for calibration purposes.
At the eleventh hour, your command center sends your outpost this piece of shit graph and then abruptly ceases communications as they enter the doomsday bunker.
You have no idea what any of the units are, why the data set is so sparse and full of gaps, or why the graph is so fucked up in general.
You only know that both high x values and high y values increase instability.
To survive, you must:
>Submit a graph with areas filled in corresponding to the three color keys: the "peninsula of stability", the edge case region surrounding it, and the unstable region outside of the peninsula.
>Post and explain any math or theories you used to get your result, or explain if you just winged it.
Anonymous at Wed, 13 Nov 2024 22:50:47 UTC No. 16473980
>>16473847
Take the sequence of functions that is zero everywhere except for [0,1/n] and equal to n on it.
Anonymous at Wed, 13 Nov 2024 23:31:16 UTC No. 16474018
>>16473980
Ty Fren!
How is this proof?
Define the sequence of functions f_n:
[math] f_n(x) = n \cdot \chi_{[0, \frac{1}{n}]}(x) [/math]
Step 1: Pointwise convergence a.e.
For any fixed x ≠ 0, there exists some N such that x > 1/n for all n ≥ N.
This means [math] f_n(x) = 0 [/math] for all large enough n, so [math] \lim_{n \to \infty} f_n(x) = 0 [/math].
If x = 0, then [math] f_n(0) = n [/math] for all n, but m({0}) = 0, so it doesn't matter.
Conclusion: [math] f_n \to 0 [/math] pointwise a.e. on [0, 1].
Step 2: Does f_n converge to 0 in measure? Nope.
To converge in measure, for every [math] \epsilon > 0 [/math], we need
[math] m(\{x : |f_n(x)| \geq \epsilon\}) \to 0 [/math] as [math] n \to \infty [/math].
Take any [math] \epsilon > 0 [/math].
Look at the set [math] E_n = \{ x \in [0, 1] : |f_n(x)| \geq \epsilon \} [/math].
For [math] x \in [0, \frac{1}{n}] [/math], we have [math] f_n(x) = n [/math], so [math] |f_n(x)| \geq \epsilon [/math].
This means [math] E_n = [0, \frac{1}{n}] [/math].
Measure of [math] E_n [/math] is [math] m(E_n) = \frac{1}{n} [/math].
As [math] n \to \infty [/math], [math] m(E_n) = \frac{1}{n} \to 0 [/math], but never actually hits 0.
So [math] f_n [/math] does NOT converge to 0 in measure.
Anonymous at Wed, 13 Nov 2024 23:41:29 UTC No. 16474029
>>16474018
>>16473980
Hold up Frens
Sequence {f_n} converges to a function f in measure on set E if:
[math] \lim_{n \to \infty} m(\{ x \in E : |f_n(x) - f(x)| > \eta \}) = 0 [/math]
for all [math] \eta > 0 [/math].
In our case, we want to show convergence to zero:
[math] \lim_{n \to \infty} m(\{ x \in E : |f_n(x)| > \eta \}) = 0 [/math]
for all [math] \eta > 0 [/math].
Now let’s apply this to our example:
1. Define the sequence:
[math] f_n(x) = n \cdot \chi_{[0, \frac{1}{n}]}(x) [/math]
(basically, f_n is 0 everywhere except on [0, 1/n], where it’s equal to n)
2. Check convergence in measure:
For any fixed [math] \eta > 0 [/math]:
- If [math] n > \eta [/math], then [math] f_n(x) > \eta [/math] on the interval [0, 1/n].
- So, [math] m(\{ x : |f_n(x)| > \eta \}) = \frac{1}{n} [/math].
3. Take the limit:
[math] \lim_{n \to \infty} m(\{ x : |f_n(x)| > \eta \}) = \lim_{n \to \infty} \frac{1}{n} = 0 [/math].
Conclusion:
According to the definition, [math] f_n \to 0 [/math] in measure since
[math] \lim_{n \to \infty} m(\{ x \in [0, 1] : |f_n(x)| > \eta \}) = 0 [/math]
for every [math] \eta > 0 [/math].
TL;DR: Sequence converges to zero pointwise a.e. AND in measure.
Anonymous at Wed, 13 Nov 2024 23:53:52 UTC No. 16474048
Can anyone help with this proof?
Suppose {f_n} is a sequence of measurable functions on E that converges in measure to f.
Further, suppose {f_n} also converges in measure to g.
Prove that f = g a.e.
Definition for convergence in measure:
A sequence {f_n} converges to a function f in measure on a set E if, for every [math] \eta > 0 [/math],
[math] \lim_{n \to \infty} m(\{ x \in E : |f_n(x) - f(x)| > \eta \}) = 0 [/math].
In this case, you need to show that if {f_n} converges in measure to both f and g, then [math] f = g [/math] almost everywhere.
Anonymous at Thu, 14 Nov 2024 00:25:52 UTC No. 16474097
>>16474048
Two lemmas before the proof.
Borel-Cantelli Lemma:
Let [math] \{E_k\}_{k=1}^{\infty} [/math] be a countable collection of measurable sets for which [math] \sum_{k=1}^{\infty} m(E_k) < \infty [/math].
Then, almost all [math] x \in \mathbb{R} [/math] belong to at most finitely many of the sets [math] E_k [/math].
Lemma:
If [math] \{f_n\} [/math] converges to [math] f [/math] in measure on [math] E [/math], then there exists a subsequence [math] \{f_{n_k}\} [/math] that converges pointwise a.e. on [math] E [/math].
Proof of the Lemma:
By the definition of convergence in measure, for each [math] k \in \mathbb{N} [/math], there exists a strictly increasing sequence of natural numbers [math] n_k [/math] such that:
[math] m(\{ x \in E : |f_j(x) - f(x)| > \frac{1}{k} \}) < \frac{1}{2^k} [/math]
for all [math] j \geq n_k [/math].
Now, define for each index [math] k [/math]:
[math] E_k = \{ x \in E : |f_{n_k}(x) - f(x)| > \frac{1}{k} \} [/math].
Then, [math] m(E_k) < \frac{1}{2^k} [/math].
Since [math] \sum_{k=1}^{\infty} m(E_k) < \infty [/math], the Borel-Cantelli Lemma applies.
This tells us that almost all [math] x \in E [/math] belong to at most finitely many of the sets [math] E_k [/math]; for such an [math] x [/math], we have [math] |f_{n_k}(x) - f(x)| \leq \frac{1}{k} [/math] for all sufficiently large [math] k [/math].
Thus, [math] f_{n_k}(x) \to f(x) [/math] pointwise a.e. on [math] E [/math].
Q.E.D.
Anonymous at Thu, 14 Nov 2024 00:26:58 UTC No. 16474099
>>16474048
>>16474097
Problem:
Suppose [math] \{f_n\} [/math] is a sequence of measurable functions on [math] E [/math] that converges in measure to [math] f [/math].
Further, suppose [math] \{f_n\} [/math] also converges in measure to [math] g [/math].
Prove that [math] f = g [/math] a.e.
Step 1: Assume [math] \{f_n\} \to f [/math] in measure and [math] \{f_n\} \to g [/math] in measure.
By the lemma above, there exists a subsequence [math] \{f_{n_k}\} [/math] that converges pointwise a.e. to [math] f [/math].
Since [math] \{f_{n_k}\} [/math] still converges in measure to [math] g [/math], apply the lemma again to get a further subsequence [math] \{f_{n_{k_j}}\} [/math] that converges pointwise a.e. to [math] g [/math]; being a subsequence of [math] \{f_{n_k}\} [/math], it also converges pointwise a.e. to [math] f [/math].
Thus, [math] f(x) = g(x) [/math] for almost every [math] x \in E [/math].
Conclusion: [math] f = g [/math] a.e.
Step 2: Converse argument.
Assume [math] f = g [/math] a.e. on [math] E [/math]. Let [math] E_0 [/math] be the set where [math] f \neq g [/math], so [math] m(E_0) = 0 [/math].
For any [math] \eta > 0 [/math],
[math] \lim_{n \to \infty} m(\{x \in E : |f_n(x) - f(x)| > \eta\}) = 0 [/math]
implies:
[math] \lim_{n \to \infty} m(\{x \in E \setminus E_0 : |f_n(x) - g(x)| > \eta\}) = 0 [/math].
Thus, [math] \{f_n\} \to g [/math] in measure on [math] E [/math].
Q.E.D.
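For what it's worth, you can also skip the subsequence machinery entirely: for any [math] \eta > 0 [/math] and any [math] n [/math],
[eqn]
\{ x \in E : |f(x) - g(x)| > \eta \} \subseteq \left\{ x \in E : |f_n(x) - f(x)| > \tfrac{\eta}{2} \right\} \cup \left\{ x \in E : |f_n(x) - g(x)| > \tfrac{\eta}{2} \right\},
[/eqn]
so [math] m(\{ |f - g| > \eta \}) \leq m(\{ |f_n - f| > \eta/2 \}) + m(\{ |f_n - g| > \eta/2 \}) \to 0 [/math], forcing [math] m(\{ |f - g| > \eta \}) = 0 [/math] for every [math] \eta > 0 [/math]; taking the union over [math] \eta = 1/k [/math] gives [math] f = g [/math] a.e.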
Anonymous at Thu, 14 Nov 2024 01:20:09 UTC No. 16474156
>>16473980
>>16473847
>>16474018
>>16474029
https://en.wikipedia.org/wiki/Conve
Check the counterexamples there for more info; >>16474029 is right, so the example in >>16473980 doesn't actually work for this problem. Sorry Fren.
Example:
Let [math] f_n(x) = \chi_{[n, \infty)}(x) [/math], where [math] \chi_{[n, \infty)}(x) [/math] is the characteristic function of the interval [n, ∞):[math]
f_n(x) =
\begin{cases}
1, & \text{if } x \geq n, \\
0, & \text{if } x < n.
\end{cases}
[/math]
**Pointwise convergence to zero**
- For any fixed [math] x \in \mathbb{R} [/math], eventually [math] n > x [/math].
- Thus, [math] f_n(x) = 0 [/math] for all sufficiently large [math] n [/math].
- Therefore, the pointwise limit is:
[math] f(x) = 0 \text{ for all } x \in \mathbb{R}. [/math]
- Conclusion: [math] f_n(x) \to 0 [/math] pointwise for all [math] x \in \mathbb{R} [/math].
**Does not converge to zero in measure**
- To check convergence in measure, we look at:
[math] E_n = \{ x \in \mathbb{R} : |f_n(x) - f(x)| > \epsilon \} [/math].
- Since [math] f(x) = 0 [/math], this simplifies to:
[math] E_n = \{ x \in \mathbb{R} : f_n(x) > \epsilon \} [/math].
- For any [math] \epsilon > 0 [/math], we have:
[math] E_n = [n, \infty) [/math].
- The measure of this set is:
[math] m(E_n) = \infty [/math].
- Since [math] m(E_n) [/math] is infinite for all [math] n [/math], it does **not** go to zero as [math] n \to \infty [/math].
Conclusion:
- The sequence [math] \{f_n\} [/math] converges pointwise to [math] f(x) = 0 [/math] for all [math] x \in \mathbb{R} [/math].
- However, it does **not** converge to zero in measure because the set [math] E_n [/math] where [math] f_n(x) \neq 0 [/math] always has infinite measure.
Anonymous at Thu, 14 Nov 2024 06:20:16 UTC No. 16474412
>>16473947
https://math.stackexchange.com/ques
Anonymous at Thu, 14 Nov 2024 09:06:07 UTC No. 16474533
>>16473856
Idea for (a) (I didn't double check): Fix an [math]\varepsilon>0[/math].
[math]f[/math] is integrable, so there exists [math]M>1[/math] with [math]\int_{\mathbb R \setminus [-(M-1),M-1]}|f| <\varepsilon[/math].
Note then also that, for each [math]n \in\mathbb N[/math], [math]\int_{\mathbb R \setminus[-M,M]}|f_n| \leq \int_{\mathbb R \setminus[-(M-1),M-1]}|f|<\varepsilon[/math], since [math]f_n[/math] is just [math]f[/math] shifted by at most [math]1[/math].
Since [math]f[/math] is continuous, it is uniformly continuous on [math][-M,M+1][/math], so [math]f_n \to f[/math] uniformly on [math][-M,M][/math].
Then there exists some [math]m \in\mathbb N[/math] such that for each [math]n \geq m[/math], [math]\sup\limits_{|x|\leq M}|f_n(x)-f(x)|<\varepsilon[/math].
It follows that for each [math]K>0[/math] and [math]n\geq m[/math], [math]\{|f_n|>K\}\cap[-M,M]\subseteq \{|f|>K-\varepsilon\}\cap[-M,M][/math], and [math]|f_n|\leq |f|+\varepsilon[/math] there.
Now [math]\int_{\{|f_n|>K\}\cap [-M,M]}|f_n|\;|g|\leq ||g||_\infty \int_{\{|f|>K-\varepsilon\}}(|f|+\varepsilon)[/math].
Next, we bound [math]\int_{\{|f_n|>K\}}|f_n| |g|\leq \int_{\{|f_n|>K\}\cap[-M,M]}|f_n|\;|g| + ||g||_\infty\int_{\mathbb R\setminus[-M,M]}|f_n|[/math].
Then even [math]\sup\limits_{n \geq m}\int_{\{|f_n|>K\}}|f_n| |g|\leq ||g||_\infty \left(\int_{\{|f|>K-\varepsilon\}}(|f|+\varepsilon) + \varepsilon\right)[/math].
Since [math]f[/math] is integrable, [math]m(\{|f|>K-\varepsilon\})\to 0[/math] as [math]K \to \infty[/math], so for large [math]K[/math] the RHS is at most [math]2\varepsilon ||g||_\infty[/math].
Since any finite collection of integrable functions is UI, we also have [math]\lim\limits_{K \to \infty}\sup\limits_{n \in \mathbb N}\int_{\{|f_n|>K\}}|f_n| |g| \leq 2\varepsilon ||g||_\infty[/math].
Then let [math]\varepsilon \to 0[/math].
Anonymous at Thu, 14 Nov 2024 16:01:40 UTC No. 16474890
>>16473397
Update on what I have. I've still yet to prove uniqueness of decomposition, and really I'm still just hand-waving that last result that if [eqn]\int_{[a,b]} g' F_C = 0 [/eqn]
[math]\forall g \in AC[a,b] [/math] where [math] g(a) = g(b) = 0 [/math], then [math] F_C = K [/math] for some constant [math] K [/math].
Anonymous at Thu, 14 Nov 2024 20:03:48 UTC No. 16475163
Is anyone here smart enough to explain why implicit differentiation works?
Like
x^2 + y^2 = 25
so
2x + 2yy' = 0?
Anonymous at Fri, 15 Nov 2024 05:29:38 UTC No. 16475736
>>16475163
It's an application of the chain rule.
Think of the derivative as an operator.
so
[eqn]
g(y) = f(x) \implies \frac{d}{dx} g(y) = \frac{d}{dx}f(x)
[/eqn]
and we know from the chain rule that [math] \frac{d g(y)}{dx} = \frac{dg(y)}{dy} \frac{dy}{dx} [/math]
putting it together
[eqn]
g(y) = f(x) \implies \frac{d}{dx} g(y) = \frac{d}{dx}f(x) \implies \frac{dg(y)}{dy} \frac{dy}{dx} = \frac{d}{dx}f(x) \implies \frac{dg(y)}{dy} \frac{dy}{dx} - \frac{d}{dx}f(x) = 0
[/eqn]
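Plugging your circle into this with [math] g(y) = y^2 [/math] and [math] f(x) = 25 - x^2 [/math] (i.e. differentiating both sides of [math] x^2 + y^2 = 25 [/math] with respect to [math] x [/math]):
[eqn]
2x + 2y\,\frac{dy}{dx} = 0 \quad \Longrightarrow \quad \frac{dy}{dx} = -\frac{x}{y} \qquad (y \neq 0),
[/eqn]
which matches what you get by differentiating the explicit branches [math] y = \pm\sqrt{25 - x^2} [/math] directly.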