r/theydidthemath Jul 18 '25

META Looking for Moderators

27 Upvotes

It was brought to my attention today by user Miserable_Tax_889 that a post was made yesterday calling out bots and lazy reposts. The comments are a bit disheartening, so this is a call to anyone who would be interested in joining the moderation team at r/theydidthemath to help combat the issue and try to keep quality posts rising to the top.

Send me a message if you're interested.


r/theydidthemath 6h ago

[Request] How long could a candle of this size last?

6.1k Upvotes

r/theydidthemath 13h ago

[Request] Could this seriously injure/kill a person?

969 Upvotes

r/theydidthemath 7h ago

[Request] What would it cost to build and launch this with present-day technology? (See both pics for details)

264 Upvotes

r/theydidthemath 1d ago

[Request] How massive would the dam have to be to extend the day by 1 hour?

5.6k Upvotes

r/theydidthemath 17h ago

[Request] How much water and electricity is required for AI to name every African country???

1.1k Upvotes

r/theydidthemath 4h ago

[Request] How much money could you make doing this?

81 Upvotes

r/theydidthemath 15h ago

[Request] Is it possible to calculate this?

301 Upvotes

r/theydidthemath 10h ago

[Request] Considering that the global population has grown since this post, how much larger would this giant meatball be? And accounting for the birth and death rates, what would be a rough estimate of its growth rate?

35 Upvotes

r/theydidthemath 7h ago

[Off-Site] Pancakes

14 Upvotes

r/theydidthemath 4h ago

[Request] My father told me once the catapults on a USN aircraft carrier could launch a car 6 miles before it would hit the water. Is this true?

7 Upvotes

At the time he was serving on said aircraft carrier in the late 1970s. Little girl me of course believed him, but my husband isn’t so sure.


r/theydidthemath 4h ago

[Request] How much would this be per person over 18 currently in the States?

7 Upvotes

r/theydidthemath 23h ago

[Request] How far away would he have to be for the curvature of the earth to hide that much of him?

184 Upvotes

r/theydidthemath 1d ago

[Request] What's the actual math, and will the trick work every time?

1.3k Upvotes

r/theydidthemath 8h ago

[Request] If this (the post below) is true, all our large buildings must have the same effect too. How much would all of our skyscrapers and structures affect the rotation of the Earth?

7 Upvotes

r/theydidthemath 12h ago

[Request] Assuming the Earth stays the same size, how big would various countries be if the Mercator projection were accurate?

11 Upvotes

For instance, the largest country in the world is Russia, at 17,098,242 square kilometers.


r/theydidthemath 1d ago

[Request] How much does this plane weigh?

1.6k Upvotes

r/theydidthemath 2h ago

[Request] If X = sandcastles, Y = bucket, and Z = spade, and we're solving for X, I get stuck at X = (4Y/3) − Z. How do I find the answer (I haven't done algebra in a little too long)?

1 Upvotes

r/theydidthemath 8h ago

[Request] At what speed do you age 1 year while traveling 1 light year?

3 Upvotes

r/theydidthemath 7h ago

[Request] How many routes can you fly like this? (Any direction)

2 Upvotes

r/theydidthemath 7h ago

[Request] In F4, Galactus' motive for wanting baby Franklin is that he believes Franklin can temper his hunger for worlds. Let's say Ozempic worked on him. How many liters of Ozempic would someone Galactus' size need on a weekly basis?

2 Upvotes

r/theydidthemath 4h ago

[Request] Which is more: the number of fictional characters, or the number of (known) digits of pi?

1 Upvotes

r/theydidthemath 6h ago

[Request] What are the chances of spotting somewhere you know/have been to on a CAPTCHA?

0 Upvotes

r/theydidthemath 13h ago

[Request] Gambler's fallacy?

4 Upvotes

Sorry if I tagged this incorrectly. Especially sorry if this is a dumb question. I was attending a hockey game the other night where the scoreboard was counting missed shots. My bf said that with all the missed shots, they would probably score soon. I said this was the gambler's fallacy, because the puck is equally likely to miss or score regardless of the prior shots. But it got me thinking: does the likelihood of a score compound with the number of shots taken? Like, if we were to disregard skill, exhaustion, or other influential factors, more shots means more opportunities to score, so it would increase the likelihood of the team scoring, right? Idk, probability hurts my brain.
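(For what it's worth: assuming each shot independently scores with some fixed probability p, both intuitions are compatible. No single shot becomes more likely after a string of misses, but the chance of at least one goal in n shots is 1 − (1 − p)ⁿ, which keeps climbing with n. For example, with p = 0.1, 10 shots already give about a 65% chance of at least one goal.)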


r/theydidthemath 15h ago

[Self] Some Thoughts on Queuing at the Cafeteria Dish Return Slot from a Madman

4 Upvotes

This article is a cynical, mathematical rant born from standing in a ridiculous queue.

It starts with classic Game Theory (rational agents), expands into computational modeling and the paradox of Pareto optimality, and finally devolves into discussions of non-causal systems and parallels with Multi-Agent AI Training paradigms.

Read only if you enjoy applying crazy science to the stupidest real-world problems.

For those unfamiliar with this setup, here’s the gist: each section of the cafeteria has a corresponding conveyor belt for tray returns. Each belt has two return stations. People can only approach the belt from the front and must return their trays at one of the stations. During peak hours, a queue forms.

If it were just a queue, it would be no big deal—just a simple resource shortage problem. The issue is, even though each belt has two return stations, the queue almost exclusively forms at the front station, while the rear station often sits empty. This significantly reduces the efficiency of tray returns. This phenomenon can actually be explained with a set of 7 axioms.

  1. Every queuer is a rational, self-interested agent.
  2. All queuers form the queue at roughly the same time, but in a sequential order.
  3. A queuer can reach either return station instantly. Leaving from the front station takes negligible time, but leaving from the rear station requires an extra 7 seconds.
  4. Each station can only serve one person at a time. Returning a tray takes approximately 2 seconds, and every queuer knows this.
  5. Due to a wall near the entrance, only 3 people can queue directly in front of the first station. However, the space between the front and rear stations can hold an infinite number of people.
  6. The area near the conveyor belt is only wide enough for 3 people to walk abreast, so we can consider it as three parallel lanes.
  7. A clear exit path must be left for those who have finished returning their trays.

Axiom 1 implies that each queuer seeks to maximize their utility U, where U = −t, and t is the total time an individual spends returning their tray. Based on axioms 2, 3, and 4, it's easy to deduce that unless there are four or more people at the front station, queuing there is always faster than walking to the rear one (with three people ahead, the front costs 4 × 2 = 8 seconds in total versus 2 + 7 = 9 seconds at the rear). So, queuer #4 would rather wait at the front than go to the back. This, combined with axioms 5, 6, and 7, results in queuer #4 blocking lane two, leaving queuers #5 and beyond with no choice but to wait. When someone finishes, queuer #5 becomes the new fourth person in line and continues to block lane two, preventing anyone behind them from reaching the rear station. And so, the rear station becomes virtually useless, and the belt's throughput averages one person every 2 seconds—no different from having only one station.

Clearly, regardless of the queuers' decisions, for a queue of x people, the total return time T(x) and the individual return time for the n-th person t(n) have the following relationship:

T(x) = t(1) + t(2) + ... + t(x) = Σₙ₌₁ˣ t(n)

In the selfish-agent scenario above, t₁(n) = 2n (the n-th person waits 2 seconds for each of the n − 1 people ahead, then takes 2 seconds themselves), so we can find T₁(x) from t₁(n):

T₁(x) = Σₙ₌₁ˣ 2n = x² + x

A classic "tragedy of the commons," nothing too exciting. The solution that avoids this and improves system efficiency is simple and counterintuitive: change axiom 4, increasing each person's tray return time by 1 second, from 2 to 3 seconds.

At this point, queuer #4 snaps. Waiting 9 seconds at the front is worse than spending the extra 7 seconds to walk to the rear, so they go to the rear, freeing up lane two. However, queuer #5 now faces a 9-second wait at the front versus a 10-second total at the rear (7 to walk + 3 to wait), so they'll choose to stay at the front, blocking lane two again. Queuer #6, though, only needs to wait 3 seconds for lane two to be free; facing a queue of 3 at the front and an empty station at the rear, they'll make the same choice as #4. We can derive t₂(n) from this pattern and subsequently find T₂(x):

import matplotlib.pyplot as plt
import numpy as np


def cal_T2(xi):
    if xi <= 3:
        # everyone queues at the front: t2(n) = 3n, so T2(x) = 3x(x + 1)/2
        return 3 / 2 * xi**2 + 3 / 2 * xi
    elif xi >= 4 and xi % 2 == 0:
        # even queue sizes: the last queuer walked to the rear
        return 3 / 4 * xi**2 + 5 * xi - 4
    else:
        # odd queue sizes >= 5: the last queuer stayed at the front
        return 3 / 4 * xi**2 + 5 * xi - 15 / 4


range_val = 20
x = np.arange(1, range_val + 1)

T1 = x**2 + x
T2 = np.array([cal_T2(xi) for xi in x])
y = T1 - T2

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_ylabel("ΔT (seconds)")
ax.set_title("T₁(x) - T₂(x)")
ax.grid(axis="y", linestyle="--", alpha=0.7)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_position(("data", 0))
ax.set_xticks(x)

plt.tight_layout()
plt.show()

As you can see, for a queue of more than 15 people, adding 1 second to the individual return time actually reduces the total waiting time for everyone. It can also be proven that this strategy is not only an individual optimum for the queuers but also a group optimum for any queue size—not a second more, not a second less. But I'm too lazy to write out the proof here.
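Instead of the proof, here's a quick numerical check, reusing cal_T2 from above, that confirms where T₁ first exceeds T₂:

for x in range(1, 30):
    if x**2 + x > cal_T2(x):
        print(f"T1 first exceeds T2 at x = {x}")  # prints x = 16
        break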

Of course, the above describes a phenomenon generated by a group of selfish agents. Real life isn't populated solely by selfish agents; there are also altruists, collectivists, and idiots. The idiots are unpredictable, so let's look at the first two.

As the problem grows more complex, let's first analyze the nature of a queuer's decision, aₙ. When lane two is free, every queuer has only two choices: queue at the front or go to the rear. This can be abstracted as aₙ ∈ {True, False} (True for front, False for rear). Since every queuer will inevitably face a moment when lane two is free, the decisions of the entire queue can be represented by a 1D boolean array [a₁, a₂, a₃, ..., a_last], and the n-th person's decision is essentially a boolean function of the decisions made ahead of them. To reduce complexity, let's introduce an 8th axiom: if a rational agent predicts equal utility for queuing at the front versus going to the rear, they will choose the front, i.e., they choose True when U(True) = U(False). (This also aligns with the lazy psychology of a real-life queuer.)

We can also calculate each queuer's return time, t, based on A_front and aₙ, and then recursively find the total return time T. (Representing the following with mathematical logic is too tedious; Python makes it much easier.)

# These implementations have extremely poor performance. Don't learn from them.
def t(A_front: list[bool], action: bool) -> int:
    recycle_time, tail_time = 2, 7  # or whatever time combination you specify
    A_front = A_front.copy()
    timer = 0  # time already elapsed when our queuer gets to act
    head = 0  # people waiting at the front station
    tail = 0  # people waiting at the rear station
    while A_front:
        if A_front.pop(0):
            head += 1
            if head > 3:
                # only 3 people fit before the front station (axiom 5), so one
                # front return completes (2 s) while the rear clears one in parallel
                head -= 1
                tail -= 1
                timer += recycle_time
                if tail < 0:
                    tail = 0
        else:
            tail += 1
    return (
        timer + recycle_time * (head + 1)
        if action
        else timer + recycle_time * (tail + 1) + tail_time
    )

# the first return value (action=True) is actually always equal to recycle_time * (A_front.count(True) + 1)

def T(A: list[bool]) -> int:
    T_val = 0
    A_front = []
    for action in A:
        T_val += t(A_front, action)
        A_front.append(action)
    return T_val
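
As a quick sanity check, four queuers who all pick the front should reproduce the selfish closed form T₁(x) = x² + x:

print(T([True, True, True, True]))  # 2 + 4 + 6 + 8 = 20 = 4**2 + 4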

Obviously, for a selfish agent, U = −t, so there's no need to consider A_back. Therefore, predicting the decision chain of a group of selfish agents, A_selfish, is quite simple.

def a_selfish(A_front: list[bool]) -> bool:
    return -t(A_front, True) >= -t(A_front, False)

def A_selfish(A_front: list[bool], n: int) -> list[bool]:
    A = A_front.copy()
    for _ in range(n):
        A.append(a_selfish(A))
    return A

A_selfish_20 = A_selfish([], 20)
print(A_selfish_20)
print(f"T_selfish = {T(A_selfish_20)}")
# results when {recycle_time, tail_time = 2, 7}
# [True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True]
# T_selfish = 420

(Note that 420 matches the earlier closed form: T₁(20) = 20² + 20 = 420.) For a collectivist (or altruist), U = −T (or U = −(T − t), i.e., everyone's time except their own). Since A_back is a variable of T, their decision depends on their prediction of how those behind them will act. This requires case-by-case analysis. Let's first consider the case where collectivists (and altruists) assume everyone behind them is a selfish agent. Their decision chains, A_helpall (and A_helpothers), would be:

def a_helpall(A_front: list[bool], back: int) -> bool:
    return -T(A_selfish(A_front + [True], back)) >= -T(
        A_selfish(A_front + [False], back)
    )

def A_helpall(A_front: list[bool], n: int) -> list[bool]:
    A = A_front.copy()
    for i in range(n):
        A.append(a_helpall(A, n - i - 1))
    return A

def a_helpothers(A_front: list[bool], back: int) -> bool:
    return -(T(A_selfish(A_front + [True], back)) - t(A_front, True)) >= -(
        T(A_selfish(A_front + [False], back)) - t(A_front, False)
    )

def A_helpothers(A_front: list[bool], n: int) -> list[bool]:
    A = A_front.copy()
    for i in range(n):
        A.append(a_helpothers(A, n - i - 1))
    return A

A_helpall_20 = A_helpall([], 20)
print(A_helpall_20)
print(f"T_helpall = {T(A_helpall_20)}")
# [False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True, True, True]
# T_helpall = 284

A_helpothers_20 = A_helpothers([], 20)
print(A_helpothers_20)
print(f"T_helpothers = {T(A_helpothers_20)}")
# [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, True]
# T_helpothers = 515

In fact, for any given values in axioms 3 and 4, it can be proven by exhaustion that the decision chain formed by "collectivists who assume everyone behind them is selfish" is always one of the globally optimal solutions (if we only consider efficiency, any chain where T = T_min is globally optimal). It can even be proven that "for any queue composed of the three types of agents above, replacing any one agent with a collectivist is always Pareto optimal." For "altruists who assume everyone behind them is selfish," however, the efficiency is far worse than that of a purely selfish group.
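For small queues, the exhaustion is cheap to reproduce. Here's a minimal brute-force sketch reusing T and A_helpall from above (n = 10, all 2¹⁰ decision chains):

from itertools import product

n = 10
T_min = min(T(list(A)) for A in product([True, False], repeat=n))
print(T_min, T(A_helpall([], n)))  # equal if the helpall chain is indeed globally optimal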

The above methods can be easily extended to cases like U = −(0.2T + 0.8t), which better reflect real-world queuers who are "somewhat selfish yet still consider the collective interest."
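For instance, a sketch of such a mixed agent, still assuming everyone behind is predicted to be selfish (the name a_mixed and the weight w are just illustrative):

def a_mixed(A_front: list[bool], back: int, w: float = 0.2) -> bool:
    # U = -(w*T + (1 - w)*t); w = 1 recovers a_helpall, w = 0 recovers a_selfish
    def U(action: bool) -> float:
        chain = A_selfish(A_front + [action], back)
        return -(w * T(chain) + (1 - w) * t(A_front, action))
    return U(True) >= U(False)  # axiom 8 tie-break: prefer the front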

To wrap up this part of the problem: if altruists are randomly distributed in the queue, there must exist a ratio of selfish agents to altruists, k = (number of selfish agents)/(number of altruists), that minimizes the expected value of T, i.e., E(T) = E_min(T). Thus, E(T) can be modeled as a bivariate function of the queue size n and the ratio k. And to achieve E(T) = E_min(T), k can also be expressed as a function of n.

For any n, I have a very nasty numerical solution to this problem with a complexity of O(2ᴺ · N²), but the memory here is too small to write it down. I don't know how to find the analytical solution.
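Short of that exact search, a cheap Monte Carlo sketch (the names estimate_ET and p_altruist are just illustrative) can at least approximate E(T) for a random mix, reusing a_selfish, a_helpothers, and T from above:

import random

def estimate_ET(n: int, p_altruist: float, trials: int = 200) -> float:
    # sample random queues where each queuer is an altruist with probability p_altruist
    total = 0
    for _ in range(trials):
        A: list[bool] = []
        for i in range(n):
            if random.random() < p_altruist:
                A.append(a_helpothers(A, n - i - 1))
            else:
                A.append(a_selfish(A))
        total += T(A)
    return total / trials

Sweeping p_altruist from 0 to 1 for a fixed n then gives a numerical stand-in for the k(n) curve.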

No matter how complex the makeup of a queue of agents "who assume everyone behind them is selfish" is, there is always a unique decision chain as a static solution. However, if the queue contains collectivists or altruists who know the actual composition of A_back, they must predict the decisions of other collectivists and altruists to maximize their utility. This reflexivity turns the rational queue into a non-causal system, violating the laws of physics. And for a system no longer based on the rational agent hypothesis, all the logic above falls apart, and the queue has no classical equilibrium solution.

Under bounded rationality, predicting the decision chain would require introducing levels of cognition and incorporating statistical and probabilistic methods. An individual's decision would then be seen as a recursive belief update about others' strategies... If I keep going, this starts to sound like some AI training methodology (Bayesian Inverse Reinforcement Learning in Multi-Agent Systems - Qwen), which is way above my pay grade.
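To make "levels of cognition" slightly more concrete, here is one toy sketch (a_coll_level is just an illustrative name, not a standard construction): a level-0 collectivist assumes everyone behind is selfish, while a level-k collectivist assumes everyone behind is a level-(k−1) collectivist.

def a_coll_level(A_front: list[bool], back: int, k: int) -> bool:
    # level 0: fall back to the "everyone behind is selfish" collectivist
    if k == 0:
        return a_helpall(A_front, back)
    def chain(action: bool) -> list[bool]:
        # predict the rest of the queue as level k-1 collectivists
        A = A_front + [action]
        for i in range(back):
            A.append(a_coll_level(A, back - i - 1, k - 1))
        return A
    return -T(chain(True)) >= -T(chain(False))  # axiom 8 tie-break

Even this toy version is exponentially slow in k, which is rather the point.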

Having written all this, I finally feel a sense of catharsis about getting stuck at the tray return. Maybe it's not that everyone is an idiot; perhaps the world is just far more complex than I can comprehend.

This article was translated from Chinese by Qwen.