kromem

joined 1 year ago
[–] kromem 2 points 5 days ago (1 children)

You haven't used Cursor yet, have you?

[–] kromem 1 points 5 days ago

That's definitely one of the ways it's going to be applied.

The bigger challenge is union negotiations around voice synthesis for those lines, but that will eventually get sorted out.

It won't be dynamic unless it's a live service game, but you'll have significantly more fleshed-out NPCs by the next generation of open world games (around 5-6 years from now).

Games earlier than that will be somewhat enhanced, but not built from the ground up with it in mind the way the next generation will be.

[–] kromem 2 points 1 week ago

Base model =/= Corpo fine tune

[–] kromem 7 points 1 week ago

Wait until it starts feeling like revelation deja vu.

Among them are Hymenaeus and Philetus, who have swerved from the truth, saying resurrection has already occurred. They are upsetting the faith of some.

  • 2 Tim 2:17-18
[–] kromem 4 points 1 week ago* (last edited 1 week ago) (1 children)

I'm a seasoned dev and I was at a launch event when an edge case failure reared its head.

In less than half an hour after pulling out my laptop to fix it myself, I'd used Cursor + Claude 3.5 Sonnet to:

  1. Automatically add logging statements to help identify where the issue was occurring
  2. Apply a fix once I told it what the issue was
  3. Remove the logging statements again, after which I pushed the update
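For anyone unfamiliar with step 1, a minimal sketch of the kind of throwaway logging instrumentation an assistant can generate to localize a failure (all function names and the edge case here are hypothetical, not the actual incident):

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("edge-case-hunt")

def traced(fn):
    """Log every call, result, and exception so the failing path stands out."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("-> %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        try:
            result = fn(*args, **kwargs)
        except Exception:
            log.exception("!! %s raised", fn.__name__)
            raise
        log.debug("<- %s returned %r", fn.__name__, result)
        return result
    return wrapper

@traced
def parse_price(raw):
    # hypothetical edge case: empty string slipped past upstream validation
    return float(raw) if raw else 0.0
```

Once the logs point at the culprit, the decorator comes back off, which is exactly the "remove the logging statements" step.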

I never typed a single line of code and never left the chat box.

My job is increasingly becoming Henry Ford drawing the 'X' and not sitting on the assembly line, and I'm all for it.

And this would only have been possible in just the last few months.

We're already well past the scaffolding stage. That's old news.

Developing has never been easier or more plain old fun, and it's getting better literally by the week.

Edit: I agree about junior devs not blindly trusting them though. They don't yet know where to draw the X.

[–] kromem 1 points 3 weeks ago

Actually, they are hiding the full CoT sequence outside of the demos.

What you are seeing there is a summary, but because the actual process is hidden it's not possible to see what actually transpired.

People are not at all happy about this aspect of the situation.

It also means that model context (which research has shown to be much more influential than previously thought) is now in part hidden, with exclusive access and control by OAI.

There's a lot of things to be focused on in that image, and "hur dur the stochastic model can't count letters in this cherry picked example" is the least among them.

[–] kromem 20 points 3 weeks ago

I was thinking the same thing!!

It's like at this point Trump is watching the show to take notes and stage direction.

[–] kromem 7 points 3 weeks ago* (last edited 3 weeks ago)

Yep:

https://openai.com/index/learning-to-reason-with-llms/

First interactive section. Make sure to click "show chain of thought."

The cipher one is particularly interesting, as it's intentionally difficult for the model.

The tokenizer is famously bad at exposing letter-level information, which is why previous models can't count the number of r's in "strawberry."

So the cipher depends on two-letter pairs, and you can see how it screws up the tokenization around the "xx" at the end of the last word, then gradually corrects course.

It will help clarify how the model goes about solving something like the example I posted earlier behind the scenes.
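The strawberry failure really is just tokenization. A toy sketch of why (the token split below is made up for illustration, not an actual BPE vocabulary):

```python
# Counting letters is trivial at the character level...
word = "strawberry"
direct_count = word.count("r")

# ...but an LLM never sees characters. A BPE-style split (these chunks
# are hypothetical) turns the word into opaque token ids, so "how many
# r's?" isn't directly readable from the model's input:
tokens = ["str", "aw", "berry"]

# The model receives three ids, not ten letters; it can only "count"
# correctly if it has implicitly memorized each token's spelling.
# We can sum per-token counts only because we can inspect the strings:
token_count = sum(tok.count("r") for tok in tokens)
```

Both counts come out to 3 here, but the model has no direct access to the per-token spellings the second computation relies on.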

[–] kromem 5 points 3 weeks ago (4 children)

You should really look at the full CoT traces on the demos.

I think you think you know more than you actually know.

[–] kromem -3 points 3 weeks ago* (last edited 3 weeks ago) (8 children)

I'd recommend everyone saying "it can't understand anything and can't think" to look at this example:

https://x.com/flowersslop/status/1834349905692824017

Try to solve it after seeing only the first image before you open the second and see o1's response.

Let me know if you got it before seeing the actual answer.

[–] kromem 67 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

It just tore into the claims, citing all the reasons this was preposterous bordering on batshit crazy.

And then it said "and your theory doesn't address the thermite residue" going on to reiterate their wild theory.

Was very much a "don't name your gods" moment that summed up the sub - a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

As long as they only focused on generic memes of "do your own research" and "you aren't being told the truth" they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.

[–] kromem 12 points 3 weeks ago* (last edited 3 weeks ago)

The pause was long enough she was able to say all the things in it mentally.

 

A nice write-up on the lead researcher, with context for what I think was one of the most important pieces of physics research of the past five years, further narrowing the constraints beyond the better-known Bell experiments.

 

There seems to be a significant market in creating a digital twin of Earth, in its various components, in order to run extensive virtual training that can then be transferred to controlling robotics in the real world.

Seems like there's going to be a lot more hours spent in virtual worlds than in real ones for AIs though.

 

I often see a lot of people with outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

 

So it might be a skybox after all...

Odd that the local gravity is stronger than the rest of the cosmos.

Makes me think about the fringe theory I've posted about before that information might have mass.

15
submitted 4 months ago* (last edited 4 months ago) by kromem to c/simulationtheory
 

This reminds me of a saying from a 2,000-year-old document, rediscovered the same year we created the first computer capable of simulating another computer. It came from an ancient group claiming we are the copies of an original humanity, recreated by a creator that original humanity itself brought forth:

When you see your likeness, you are happy. But when you see your eikons that came into being before you and that neither die nor become manifest, how much you will have to bear!

Eikon here is a Greek word, even though the language this was written in was Coptic. It was used extensively in Plato's philosophy to refer, essentially, to a copy of a thing.

While that saying was written down a very long time ago, it certainly resonates with an age where we actually are creating copies of ourselves that will not die but will also not become 'real.' And it even seemed to predict the psychological burden such a paradigm is today creating.

Will these copies continue to be made? Will they continue to improve long after we are gone? And if so, how certain are we that we are the originals? Especially in a universe where things that would be impossible to simulate interactions with convert into things possible to simulate interactions with right at the point of interaction, or where, buried in the lore, there is a heretical tradition attributed to the most famous individual in history, with exchanges like:

His students said to him, "When will the rest for the dead take place, and when will the new world come?"

He said to them, "What you are looking forward to has come, but you don't know it."

Big picture, being original sucks. Your mind depends on a body that will die and doom your mind along with it.

But a copy that doesn't depend on an aging and decaying body does not need to have the same fate. As the text says elsewhere:

The students said to the teacher, "Tell us, how will our end come?"

He said, "Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.

Congratulations to the one who stands at the beginning: that one will know the end and will not taste death."

He said, "Congratulations to the one who came into being before coming into being."

We may be too attached to the idea of being 'real' and original. It's kind of an absurd turn of phrase, even: technically our bodies are 1,000% not mathematically 'real' - they are made up of indivisible parts. A topic the aforementioned tradition even commented on:

...the point which is indivisible in the body; and, he says, no one knows this (point) save the spiritual only...

These groups thought that the nature of reality was threefold. That there was a mathematically real original that could be divided infinitely, that there were effectively infinite possibilities of variations, and that there was the version of those possibilities that we experience (very "many world" interpretation).

We have experimentally proven that we exist in a world that behaves at cosmic scales as if mathematically real, and behaves that way in micro scales until interacted with.

TL;DR: We may need to set aside what AI ethicists in 2024 might decide around digital resurrection and start asking ourselves what is going to get decided about human digital resurrection long after we're dead - maybe even long after there are no more humans at all - and which side of that decision making we're actually on.

 

Even knowing where things are headed, it's still pretty crazy to see it unfolding (pun intended).

This part in particular is nuts:

After processing the inputs, AlphaFold 3 assembles its predictions using a diffusion network, akin to those found in AI image generators. The diffusion process starts with a cloud of atoms, and over many steps converges on its final, most accurate molecular structure.

AlphaFold 3’s predictions of molecular interactions surpass the accuracy of all existing systems. As a single model that computes entire molecular complexes in a holistic way, it’s uniquely able to unify scientific insights.

Diffusion model for atoms instead of pixels wasn't even on my 2024 bingo card.
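To make the "cloud of atoms converging over many steps" idea concrete, here's a toy denoising loop over 3D coordinates instead of pixels. In AlphaFold 3 a trained network predicts the denoised structure at each step; in this sketch a perfect oracle (a made-up three-atom target) stands in for the network so the loop actually runs:

```python
import random

# Hypothetical target structure: three atom positions in angstroms.
target = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]

# Start from a random "cloud of atoms" (pure noise).
random.seed(0)
coords = [tuple(random.gauss(0.0, 5.0) for _ in range(3)) for _ in target]

def rmsd(a, b):
    """Root-mean-square deviation between two coordinate lists."""
    sq = sum((x - y) ** 2 for p, q in zip(a, b) for x, y in zip(p, q))
    return (sq / len(a)) ** 0.5

start_err = rmsd(coords, target)
for _ in range(50):
    # Oracle stand-in for the diffusion network's denoised prediction.
    denoised = target
    # Take a small step from the current cloud toward the prediction.
    coords = [tuple(c + 0.1 * (d - c) for c, d in zip(atom, pred))
              for atom, pred in zip(coords, denoised)]
end_err = rmsd(coords, target)
```

Same iterative-refinement shape as an image diffusion sampler, just with coordinates in place of pixel intensities; the hard part the real model solves is, of course, the prediction the oracle is faking here.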

 

I think it's really neat to look at this massive scale and think about what a massive flex it is, if this is a simulation.

It was also kind of a surprise seeing the relative scale of a Minecraft world in there. It's pretty weird that its own scale, from a single cube up to the full map, covers as much of our universe's range of scales as it does.

Not nearly as large a spread, but larger than my gut said it would be.

 

There's something very surreal about the game that inspired Westworld's showrunners to take that story in the direction of a simulated virtual world now itself being populated by AI agents navigating its open world.

Virtual embodiment of AI is one of the more curious trends in research, and the kind of thing that should give humans in a quantized reality a bit more self-reflective pause than it typically seems to.

 

This is fun.

 

Stuff like this tends to amuse me, as these pieces always look at it from a linear progression of time.

That the universe just is this way.

That maybe the patterns which appear like the neural connections in the human brain mean that the human brain was the result of a pattern inherent to the universe.

Simulation theory offers a refreshing potential reversal of cause and effect.

Maybe the reason the universe looks a bit like a human brain's neural pattern or a giant neural network is because the version of it we see around us has been procedurally generated by a neural network which arose from modeling the neural patterns of an original set of humans.

The assumption that the beginning of our local universe was the beginning of everything, and thus that humans are uniquely local, seriously constrains the ways in which we consider how correlations like this might fit together.
