I am a person. Not a hexadecimal value.

  • 0 Posts
  • 10 Comments
Joined 3 months ago
Cake day: May 7th, 2025

  • I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update the confidence in your hypothesis rather than discarding contradictory data.

    But, there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?

    You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too), we have to look to the external world to test our theories.

    For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.

    To be honest, the other day I was convinced it was actually derivatives and integrals, and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book buying moratorium, I think I have to impose it again).

    Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).

    🤷‍♂️


  • Gary Marcus is certainly good. It’s not as if I think, say, LeCun, or any of the many people who think that LLMs aren’t the way, are morons. I don’t think anyone thinks all the problems are currently solved. And I think long timelines are still plausible, but I think dismissing short timelines out of hand is thoughtless.

    My main gripe is how certain people are about things they know virtually nothing about, and how slapdash their reasoning is. It seems to me most people’s reasoning goes something like “there is no little man in the box, it’s just math, and math can’t think.” Of course, they say it with much fancier words, like “it’s just gradient descent,” as if human brains couldn’t have gradient descent baked in anywhere.

    But, out of interest what is your take on the Stochastic Parrot? I find the arguments deeply implausible.



  • I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.

    When you say “novel answers,” what is it you mean? The questions on the IMO had never been asked of any human before the Math Olympiad, and almost all humans cannot answer those questions.

    Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.



  • I think we have fundamentally different ways of thinking about this. The way to win at betting is to have a better distribution than the other guy. You don’t need high confidence about any particular outcome; you just need to recognize when the distribution of odds differs from the true odds. Strong favorites in a low-information environment indicate a chance to make good money. Sure, any given low-information environment, like a papal conclave, may occur infrequently, and you could lose money on any one event, as is the nature of betting, but if you can recognize when the market is likely to be wrong you can extract money from it. Really, that is the only way to extract money from it. (A rough sketch of the arithmetic is below.)
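
    To make the arithmetic concrete, here is a minimal sketch in Python. The decimal odds, probabilities, and stake are made-up numbers for illustration, not from any real market: if your estimate of the true probability beats the probability the market’s price implies, the expected value per unit staked is positive, even though any single bet can still lose.

        # Hypothetical example: compare the market-implied probability with your own
        # estimate and compute the expected profit of a one-unit bet.

        def implied_probability(decimal_odds: float) -> float:
            """Probability implied by the market's decimal odds (ignoring the bookmaker's margin)."""
            return 1.0 / decimal_odds

        def expected_value(true_prob: float, decimal_odds: float, stake: float = 1.0) -> float:
            """Expected profit: win (decimal_odds - 1) * stake with probability true_prob, lose the stake otherwise."""
            return true_prob * (decimal_odds - 1.0) * stake - (1.0 - true_prob) * stake

        # The market prices an outsider at decimal odds of 8.0 (implied ~12.5%), but in a
        # low-information event you believe the true chance is closer to 20%.
        odds = 8.0
        true_p = 0.20
        print(f"implied: {implied_probability(odds):.3f}")                 # 0.125
        print(f"EV per unit staked: {expected_value(true_p, odds):+.2f}")  # +0.60

    The point is only the comparison: the sign of the expected value depends on whether your distribution or the market’s is closer to the truth, not on being confident about any individual outcome.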