# Is Using Someone's Reason Against Them Fraud?

## Recommended Posts

The one number 20, and 20 number ones have exactly the same value relative to each other because the numbers represent values fixed to each other by the principles of mathematics.

20 x 1 = 20

1 x 20 = 20

This is not arbitrary. Any equation with an inequality sign in the middle can only mean that one person has to lose in order for money to be transferred to the person who won. And that's zero-sum.

You're committing an equivocation fallacy. Numerical value is not the same kind of value that people act on.


Sounds familiar. I've read that a blocked traumatic event can cause symptoms of not feeling emotions: not from lacking the capacity to feel them, but from a deep fear of recalling the event. I think your therapist sucks. As in, "I made up a word! I'm done. Give me your money." Big whoopee, how 'bout a little help here? I've been there, btw, except I could feel one emotion: rage. Lots of rage for a couple of decades. I had to stay away from people a lot. Now I rarely feel rage, except for the occasional commute-driving Tourette's.

Well, actually, my therapist was quite helpful.

I don't think any traumatic events are the cause of my alexithymia. I've been like this ever since I can remember.

The one number 20, and 20 number ones have exactly the same value relative to each other because the numbers represent values fixed to each other by the principles of mathematics.

20 x 1 = 20

1 x 20 = 20

This is not arbitrary. Any equation with an inequality sign in the middle can only mean that one person has to lose in order for money to be transferred to the person who won. And that's zero-sum.

You're committing an equivocation fallacy. Numerical value is not the same kind of value that people act on.

Of course not.

The value of numbers in relation to other numbers is fixed and utterly impersonally objective. Honest people who deal in reality know this. In contrast, dishonest people who try to get something for nothing are anything but objective, for they love lies.

Everyone who cheats others will be cheated by others... but you can't cheat an honest man.

Greg

Of course not.

The value of numbers in relation to other numbers is fixed and utterly impersonally objective. Honest people who deal in reality know this. In contrast, dishonest people who try to get something for nothing are anything but objective, for they love lies.

Everyone who cheats others will be cheated by others... but you can't cheat an honest man.

Greg

Which is still completely irrelevant because that's the wrong kind of value.

Of course not.

The value of numbers in relation to other numbers is fixed and utterly impersonally objective. Honest people who deal in reality know this. In contrast, dishonest people who try to get something for nothing are anything but objective, for they love lies.

Everyone who cheats others will be cheated by others... but you can't cheat an honest man.

Greg

Which is still completely irrelevant because that's the wrong kind of value.

...only for those who have the wrong kind of values.

Greg

...only for those who have the wrong kind of values.

Greg

Is your grasp of logic so tenuous that you have to resort to ad hominem attacks to support your argument?

...only for those who have the wrong kind of values.

Greg

Is your grasp of logic so tenuous that you have to resort to ad hominem attacks to support your argument?

It's not an argument. It is clear that we each live our respective lives guided by a totally different set of ethical values, and from that simple fact arises each of our different views.

Greg

It's not an argument. It is clear that we each live our respective lives guided by a totally different set of ethical values, and from that simple fact arises each of our different views.

Greg

Maybe. But yours is wrong.

It's not an argument. It is clear that we each live our respective lives guided by a totally different set of ethical values, and from that simple fact arises each of our different views.

Greg

Maybe. But yours is wrong.

Ah, the "my argument is bigger than your argument, therefore I win" paradigm.

Doesn't work, dear.

Can you possibly be cogent as to why?

Greg is difficult to argue with in standard forms.

You will have to use a lot more effort to be clear on this one.

A...

In the first post that I quoted, you calculated the expected payoff one way. Later, you used a different method. I'll get to that in a moment.

I noticed this too and I went to the library today to get some books and figure out what exactly was going wrong. As it turns out, both calculations are wrong. The game is not sequential, despite being turn-based. In a sequential game, neither player is allowed to switch strategies, but this is obviously untrue for the auction. The problem is that the game-tree is infinite, and so one cannot get an accurate picture of what's going on by picking an arbitrary cutoff point and trying to calculate from there.

The right way to look at it is, at each point of the game, for each player to ask whether or not to drop out. At the beginning of the game, dropping out gives you \$0 as does bidding only \$0, so there is no reason to play the game, but there is also no reason not to. I'll get to this a little later. Assume that the game begins anyway. Then, at each round n ( n greater than or equal to 0), the value of dropping out (for the player who bids first and with 1 dollar increments) is -( 2n + 1), whereas the value of not dropping out is p*(20 - n - 1), where p is the probability that the opponent will drop out. Now, a player should drop out only when -(2n + 1) > p*(20 - n - 1), i.e. when there is more to gain by dropping out than continuing with a probability p of the other player dropping out. Solving this inequality for n we have,

n < (19p + 1)/(2(p - 1)),

and for the even player we have,

n < 10p/(p - 1)

But the term on the right-hand side is negative for all p, whereas n is always non-negative. Thus, for both players, once the game has started, it is always better to continue than to drop out, regardless of the probability of the other player dropping out. Unless, of course, the probability of the other player dropping out is 1, in which case the analysis is slightly different but the conclusion is the same: if p is 1, the calculation -2n - 1 > 20 - 2n - 1 reduces to 0 > 20, which is false, and we conclude that it is not better to drop out of the auction even if the other player is guaranteed to drop out on his next turn.

Not true. As soon as they realize they are caught in an escalation game, they realize that their expected payoff is zero or negative. In that case, their best strategy --- the strategy that will minimize their losses --- is to drop out as quickly as possible.

This is incorrect as the calculation above shows, because they have to minimize their losses. The best way to do this is to win the \$20 from Carl rather than drop out immediately because that will reduce the loss by \$20.

I agree that that is indeed the point. The problem is that in this case you are leaving yourself vulnerable to exploitation exactly by assuming that the probability that Alice will drop out after the next bid is 50%. You've proven that. In fact, you've argued quite persuasively that if both players make the maximum entropy assumption, that neither player will drop out, ever (or until they run out of money). But, that means that the probability of the other player dropping out is essentially zero. If that's true, then the expected payoff is also essentially zero using your calculation.

The fact is that not every situation in which one is ignorant --- lacking information --- is a valid probability problem. There may be no rational way to assign probabilities in certain circumstances. It's sort of like trying to time the stock market: if everyone is equally well informed, there is no way to do better than average. As you said in another post, no matter what scheme you devise, you can't expect to win. If that is the case, then your expected payoff is zero or negative. So, contrary to what a naive analysis might suggest, it is not rational to play the game. It is not rational to bid any amount greater than zero.

This brings us to the important point. As the expected value of dropping out of the auction at the beginning is \$0 and the value of bidding \$0 on the first move is also \$0, there is no reason to drop out right at the beginning nor is there any reason not to.

Now, if it is, in fact, true that it is rational not to play, and Alice deduces this, Bob can deduce that Alice would deduce that and thus predict that she will drop out at the beginning. He can then bid \$0 and win \$20. This means that an irrational agent would outperform a rational one, which is a problem if you think that reason should be one's guide to action.

Damn it. I made a mistake. At the beginning of the auction, one can drop out right away and get \$0, or one can bid \$0, in which case the value is 20*p where p is the probability that the opponent will drop out on that round. Bidding \$0 is a positive value for all p except when p = 0. Therefore, it is rational to play the game, unless one is absolutely certain that the opponent will never drop out. But even then, my argument still holds.
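The corrected first-move comparison can be sketched in a few lines of Python (a toy illustration of mine, not part of the original post; the function names are hypothetical and the payoff expressions are taken from the post above):

```python
# First move of the dollar auction, per the post above:
# dropping out immediately pays $0, while opening with a bid of $0
# pays 20*p, where p is the probability the opponent drops out.

def value_drop_out() -> float:
    """Payoff for refusing to play at all."""
    return 0.0

def value_bid_zero(p: float) -> float:
    """Expected payoff of opening with a $0 bid: win $20 with probability p."""
    return 20.0 * p

# Bidding $0 weakly dominates dropping out, and strictly so for any p > 0.
for p in (0.0, 0.1, 0.5, 1.0):
    assert value_bid_zero(p) >= value_drop_out()
```

So, under the post's own assumptions, opening with a \$0 bid is rational unless one is certain the opponent will never drop out.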

Ah, the "my argument is bigger than your argument, therefore I win" paradigm.

Doesn't work, dear.

Can you possibly be cogent as to why?

Greg is difficult to argue with in standard forms.

You will have to use a lot more effort to be clear on this one.

A...

Greg doesn't even have an argument to begin with. I don't see why I should bother with it at all.

Damn it. I made a mistake.

Like I explained to you, a 10-year-old NYC kid would own you.

I wonder what it feels like to make a mistake (???).

--Brant

arguing with Greg, btw, is like arguing with a piece of taffy, wondering how to get an advantage

It's not an argument. It is clear that we each live our respective lives guided by a totally different set of ethical values, and from that simple fact arises each of our different views.

Greg

Maybe. But yours is wrong.

From the values you live by, I'm certain it appears that way to you. It's a truth beyond argument that each of us sees the values we chose not to live by as being wrong. That's why we each did not choose them. And only the consequences of each of our own personal lives have the final say on that... and not each other. You alone know best how the life you fully deserve is turning out. While all I know is that there is a world of difference between our values.

Greg

Ah, the "my argument is bigger than your argument, therefore I win" paradigm.

Doesn't work, dear.

Can you possibly be cogent as to why?

Greg is difficult to argue with in standard forms.

You will have to use a lot more effort to be clear on this one.

A...

Greg doesn't even have an argument to begin with. I don't see why I should bother with it at all.

Hold onto that thought.

It's as close to the truth

as you'll ever get.

Greg

Damn it. I made a mistake.

Like I explained to you, a 10-year-old NYC kid would own you.

That's the problem with being a snake...

...there are always bigger snakes.

Greg

I wonder what it feels like to make a mistake (???).

--Brant

arguing with Greg, btw, is like arguing with a piece of taffy, wondering how to get an advantage

Excellent, Brant.

Damn close to the Tai Chi way he "absorbs" the "opponent's" arguments and reverses them.

Vladimir Putin is a highly advanced judo martial artist and, as Sun Tzu explained, your army must remain balanced as you direct the opponent's force and position to his disadvantage.

You need to remain centered and balanced.

I wonder what it feels like to make a mistake (???).

--Brant

arguing with Greg, btw, is like arguing with a piece of taffy, wondering how to get an advantage

Excellent, Brant.

Damn close to the Tai Chi way he "absorbs" the "opponent's" arguments and reverses them.

Vladimir Putin is a highly advanced judo martial artist and, as Sun Tzu explained, your army must remain balanced as you direct the opponent's force and position to his disadvantage.

You need to remain centered and balanced.

That would be Tae Kwon Do.

You need to remain centered and balanced.

That would be Tae Kwon Do.

Bob:

That is a basic principle for all leveraged fighting, sports, etc.

A...

In the first post that I quoted, you calculated the expected payoff one way. Later, you used a different method. I'll get to that in a moment.

I noticed this too and I went to the library today to get some books and figure out what exactly was going wrong. As it turns out, both calculations are wrong. The game is not sequential, despite being turn-based. In a sequential game, neither player is allowed to switch strategies, but this is obviously untrue for the auction. The problem is that the game-tree is infinite, and so one cannot get an accurate picture of what's going on by picking an arbitrary cutoff point and trying to calculate from there.

The right way to look at it is, at each point of the game, for each player to ask whether or not to drop out. At the beginning of the game, dropping out gives you \$0 as does bidding only \$0, so there is no reason to play the game, but there is also no reason not to. I'll get to this a little later. Assume that the game begins anyway. Then, at each round n ( n greater than or equal to 0), the value of dropping out (for the player who bids first and with 1 dollar increments) is -( 2n + 1), whereas the value of not dropping out is p*(20 - n - 1), where p is the probability that the opponent will drop out. Now, a player should drop out only when -(2n + 1) > p*(20 - n - 1), i.e. when there is more to gain by dropping out than continuing with a probability p of the other player dropping out. Solving this inequality for n we have,

n < (19p + 1)/(2(p - 1)),

and for the even player we have,

n < 10p/(p - 1)

But the term on the right-hand side is negative for all p, whereas n is always non-negative. Thus, for both players, once the game has started, it is always better to continue than to drop out, regardless of the probability of the other player dropping out. Unless, of course, the probability of the other player dropping out is 1, in which case the analysis is slightly different but the conclusion is the same: if p is 1, the calculation -2n - 1 > 20 - 2n - 1 reduces to 0 > 20, which is false, and we conclude that it is not better to drop out of the auction even if the other player is guaranteed to drop out on his next turn.

I realized this morning that neither previous calculation was correct. However, the current calculation is still not correct.

For the player that goes first, the value of dropping out is -(2n - 1) if we start counting at 1. The problem is the assertion that the value of not dropping out is p * (20 - n - 1). First, the formula should be p * (20 - (2n + 1)) for consistent use of n. For example, if Alice bid \$1 on turn one, then she would be out \$1 if she quit. However, if she bid \$3 on round two and Bob didn't bid again, she would win \$20 - \$3 = \$17.

The problem is that the formula only gives the expected payoff if the opponent drops out. The question is, what is the payoff if the opponent doesn't drop out? Then, the expected payoff is (1 - p) * something. That something is hard to evaluate because of the infinite game tree, as you say. But, that doesn't mean that it's ok to ignore it. The correct strategy for each player is to attempt to evaluate all of the infinite possibilities to see where they might lead. Of course, it is not possible to list all of the possibilities, but it is obvious that if the bidding reaches \$20 or more, the payoff will be negative. So, if there is essentially no chance of winning before getting to \$20, there is no reason to play. Even if there is a positive probability of winning before getting to \$20, if there is a substantial probability of the game going beyond \$20, the player's expected payoff could be negative, which would be a reason not to play. In fact, given your argument that once a player has started to play such a game, he must continue to play in order to minimize his losses, it seems likely that his losses could be quite substantial, whether he ultimately beats his nominal opponent or not. In fact, it seems quite clear to me that no matter how far in the hole a player is, it is in his interest to quit as soon as possible.
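The \$20 threshold in the paragraph above can be illustrated with a small sketch (my own toy code, assuming the standard dollar-auction convention that both the winner and the runner-up forfeit their final bids, which matches the loss accounting used in this thread):

```python
# Payoffs when the auction stops: the winner collects the $20 prize
# but pays his winning bid; the loser simply forfeits his last bid.
# Bids alternate in $1 increments, so the loser's last bid is $1
# below the winner's.

PRIZE = 20

def payoffs_if_stops(winning_bid: int) -> tuple[int, int]:
    """(winner's payoff, loser's payoff) if the auction ends at winning_bid."""
    losing_bid = winning_bid - 1
    return PRIZE - winning_bid, -losing_bid

# Below $20 the winner still nets a profit...
assert payoffs_if_stops(1) == (19, 0)
assert payoffs_if_stops(19) == (1, -18)
# ...but once the bidding passes $20, even the "winner" is in the red.
assert payoffs_if_stops(21) == (-1, -20)
```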

Not true. As soon as they realize they are caught in an escalation game, they realize that their expected payoff is zero or negative. In that case, their best strategy --- the strategy that will minimize their losses --- is to drop out as quickly as possible.

This is incorrect as the calculation above shows, because they have to minimize their losses. The best way to do this is to win the \$20 from Carl rather than drop out immediately because that will reduce the loss by \$20.

The problem is that your analysis is incorrect so I'll stand by what I said. A player's expected payoff is always zero or negative.

I agree that that is indeed the point. The problem is that in this case you are leaving yourself vulnerable to exploitation exactly by assuming that the probability that Alice will drop out after the next bid is 50%. You've proven that. In fact, you've argued quite persuasively that if both players make the maximum entropy assumption, that neither player will drop out, ever (or until they run out of money). But, that means that the probability of the other player dropping out is essentially zero. If that's true, then the expected payoff is also essentially zero using your calculation.

The fact is that not every situation in which one is ignorant --- lacking information --- is a valid probability problem. There may be no rational way to assign probabilities in certain circumstances. It's sort of like trying to time the stock market: if everyone is equally well informed, there is no way to do better than average. As you said in another post, no matter what scheme you devise, you can't expect to win. If that is the case, then your expected payoff is zero or negative. So, contrary to what a naive analysis might suggest, it is not rational to play the game. It is not rational to bid any amount greater than zero.

This brings us to the important point. As the expected value of dropping out of the auction at the beginning is \$0 and the value of bidding \$0 on the first move is also \$0, there is no reason to drop out right at the beginning nor is there any reason not to.

Now, if it is, in fact, true that it is rational not to play, and Alice deduces this, Bob can deduce that Alice would deduce that and thus predict that she will drop out at the beginning. He can then bid \$0 and win \$20. This means that an irrational agent would outperform a rational one, which is a problem if you think that reason should be one's guide to action.

It's only a problem if you think that people like Carl will go around dangling \$20 bills in front of people's faces. We've been concentrating on Alice and Bob, but there are really three players in this game. Carl is also a player. So, if it is rational for Alice and Bob to decline playing and Bob deduces that fact and bids \$0 for \$20, Carl will be out \$20. Given that, Carl will quickly give up on the game knowing that he is likely to lose money.

The situation that you describe is quite unusual. It is a situation in which there is essentially no right answer. So, Carl can play a gambit and he might come out ahead and Bob can play a gambit and he might come out ahead, but there is no guarantee or even a probability argument that can be made in favor of either person's actions. If Carl and Bob and Alice were to engage in such activities repeatedly, their amortized expected payoff would be zero as they each tried to outsmart each other with no rational basis for their actions. This isn't a case of irrationality winning out over rationality. It is only an act of futility.

Darrell

It's only a problem if you think that people like Carl will go around dangling \$20 bills in front of people's faces. We've been concentrating on Alice and Bob, but there are really three players in this game. Carl is also a player. So, if it is rational for Alice and Bob to decline playing and Bob deduces that fact and bids \$0 for \$20, Carl will be out \$20. Given that, Carl will quickly give up on the game knowing that he is likely to lose money.

The situation that you describe is quite unusual. It is a situation in which there is essentially no right answer. So, Carl can play a gambit and he might come out ahead and Bob can play a gambit and he might come out ahead, but there is no guarantee or even a probability argument that can be made in favor of either person's actions. If Carl and Bob and Alice were to engage in such activities repeatedly, their amortized expected payoff would be zero as they each tried to outsmart each other with no rational basis for their actions. This isn't a case of irrationality winning out over rationality. It is only an act of futility.

Darrell

Well put Darrell. You aptly summed up the curse of zero sum.

Greg

You need to remain centered and balanced.

That would be Tae Kwon Do.

Bob:

That is a basic principle for all leveraged fighting, sports, etc.

A...

...but not for leveraged "investments".

Greg

...but not for leveraged "investments".

Greg

Pretty accurate. I just want to grow up and be like Hillary, who has no knowledge of:

1) Whitewater;

2) Rose law firm billing records;

3) a key aide found in Fort Marcy Park;

4) Bimbos and Benghazi;

5) Cattle futures (her "leveraged" investment... gag);

6) getting too tired to list the multiple felonies that these two have been involved in...

...but not for leveraged "investments".

Greg

Pretty accurate. I just want to grow up and be like Hillary, who has no knowledge of:

1) Whitewater;

2) Rose law firm billing records;

3) a key aide found in Fort Marcy Park;

4) Bimbos and Benghazi;

5) Cattle futures (her "leveraged" investment... gag);

6) getting too tired to list the multiple felonies that these two have been involved in...

Obviously qualified to be POTUS.

--Brant

what do ya think it's really all about?

I realized this morning that neither previous calculation was correct. However, the current calculation is still not correct.

For the player that goes first, the value of dropping out is -(2n - 1) if we start counting at 1. The problem is the assertion that the value of not dropping out is p * (20 - n - 1). First, the formula should be p * (20 - (2n + 1)) for consistent use of n. For example, if Alice bid \$1 on turn one, then she would be out \$1 if she quit. However, if she bid \$3 on round two and Bob didn't bid again, she would win \$20 - \$3 = \$17.

You're right. I had the right equations saved to a png file, but the forum didn't let me upload it, and I wrote down the wrong ones by accident; the solutions are right regardless. In my scheme, n starts at 0 (which is really the second round of the auction), and the right equations are:

-(2n + 1) > p*(20 - (2n + 1))

-2n - 1 > 20p - 2pn - p

2(p - 1)n - 1 > 19p

n < (19p + 1)/(2(p - 1)) (the direction flips because we divide by 2(p - 1), which is negative)

and

-2n > p(20 - 2n)

-2n > 20p - 2pn

2(p - 1)n > 20p

n < 10p/(p - 1).

So really, nothing changes.
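The corrected inequalities can be sanity-checked numerically (a brute-force sketch of mine, comparing the odd player's drop-out condition against the closed form; nothing here is from the original posts):

```python
# Odd player: dropping out at round n costs 2n + 1, while continuing is
# worth p * (20 - (2n + 1)). Dropping out is strictly better iff
#     -(2n + 1) > p * (20 - (2n + 1)),
# which rearranges to n < (19p + 1) / (2(p - 1)). For p in [0, 1) the
# right-hand side is negative while n >= 0, so the condition never holds.

def should_drop_out(n: int, p: float) -> bool:
    """True iff dropping out at round n strictly beats continuing."""
    return -(2 * n + 1) > p * (20 - (2 * n + 1))

for p in [i / 100 for i in range(100)]:          # p in [0, 1)
    threshold = (19 * p + 1) / (2 * (p - 1))
    assert threshold < 0                         # closed-form bound is negative
    for n in range(50):
        assert should_drop_out(n, p) == (n < threshold)  # forms agree
        assert not should_drop_out(n, p)         # so: never drop out
```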

The problem is that the formula only gives the expected payoff if the opponent drops out. The question is, what is the payoff if the opponent doesn't drop out? Then, the expected payoff is (1 - p) * something. That something is hard to evaluate because of the infinite game tree, as you say. But, that doesn't mean that it's ok to ignore it. The correct strategy for each player is to attempt to evaluate all of the infinite possibilities to see where they might lead. Of course, it is not possible to list all of the possibilities, but it is obvious that if the bidding reaches \$20 or more, the payoff will be negative. So, if there is essentially no chance of winning before getting to \$20, there is no reason to play. Even if there is a positive probability of winning before getting to \$20, if there is a substantial probability of the game going beyond \$20, the player's expected payoff could be negative, which would be a reason not to play. In fact, given your argument that once a player has started to play such a game, he must continue to play in order to minimize his losses, it seems likely that his losses could be quite substantial, whether he ultimately beats his nominal opponent or not. In fact, it seems quite clear to me that no matter how far in the hole a player is, it is in his interest to quit as soon as possible.

An infinite game tree cannot be evaluated. One must start evaluations at the end of the game tree and work backwards. But as an infinite game tree has no end, that obviously won't work.

Here is the problem with that line of reasoning. Let's say that Alice predicts that the bidding will reach \$20 or more. If she drops out at \$20 or more, then her payoff is -\$20 or less, and if she drops out at \$0, her payoff is \$0. Thus, Alice drops out at \$0; but then Bob bids \$0 and wins \$20, and the bidding never reaches \$20 at all. A contradiction. Therefore, neither player can predict that the bidding will reach \$20 or more and remain consistent.

It's only a problem if you think that people like Carl will go around dangling \$20 bills in front of people's faces. We've been concentrating on Alice and Bob, but there are really three players in this game. Carl is also a player. So, if it is rational for Alice and Bob to decline playing and Bob deduces that fact and bids \$0 for \$20, Carl will be out \$20. Given that, Carl will quickly give up on the game knowing that he is likely to lose money.

The problem here is that, since Alice is just as rational as Bob and since she has just as much information as he does, she can deduce that if Bob enters the game and bids only \$0, then she can out-bid him by bidding \$1 for a gain of \$19. The game begins and Carl wins.

The situation that you describe is quite unusual. It is a situation in which there is essentially no right answer. So, Carl can play a gambit and he might come out ahead and Bob can play a gambit and he might come out ahead, but there is no guarantee or even a probability argument that can be made in favor of either person's actions. If Carl and Bob and Alice were to engage in such activities repeatedly, their amortized expected payoff would be zero as they each tried to outsmart each other with no rational basis for their actions. This isn't a case of irrationality winning out over rationality. It is only an act of futility.

This is exactly my point. If Alice, Bob, and Carl all take turns using this gambit on each other, then they can only ever achieve a re-distribution of money, and never actually create any value. The meta-game is zero-sum.
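That the meta-game only redistributes money can be checked in a couple of lines (again a sketch of mine, using the both-pay convention: Carl keeps both final bids and pays out the \$20 prize):

```python
# In any completed auction, the three payoffs sum to exactly zero:
# money moves between Alice, Bob, and Carl, but none is created.

PRIZE = 20

def round_payoffs(winning_bid: int, losing_bid: int) -> tuple[int, int, int]:
    """(winner, loser, Carl) payoffs for one completed auction."""
    winner = PRIZE - winning_bid        # prize minus what the winner paid
    loser = -losing_bid                 # the runner-up forfeits his last bid
    carl = winning_bid + losing_bid - PRIZE
    return winner, loser, carl

for w, l in [(1, 0), (7, 6), (20, 19), (33, 32)]:
    assert sum(round_payoffs(w, l)) == 0   # strictly zero-sum
```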

This is what I meant when I said that Carl's gambit is not fundamentally different from him holding up Alice and Bob at gunpoint, which is why I conclude that this kind of gambit is coercive and therefore immoral.