j⧉nus (@repligate)'s Tweets - 2022-12

🔗 j⧉nus (@repligate) 2022-12-31 23:11 UTC

@Th3Wellerman @Lan_Dao_ Never considered that 40 ppl who can't wipe their ass were the price for me to be born. That's a lot to atone for.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 21:57 UTC

@bakztfuture just predict the completion to the sequence
GPT-2: pretty good for object impermanent fetish porn
GPT-3: fetish porn has object permanence now :o, can think step-by-step?
GPT-3.5: passes bar exams, superhuman IQ, automates your job
GPT-4: ???

Likes: 7 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 21:49 UTC

@goth600 Tehe gwern.net/Differences

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 20:21 UTC

@mathemagic1an @jaqnjil_ai @gpt_techsupport oh ok good to know, I haven't tested 003 outside the playground

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 20:18 UTC

@jaqnjil_ai @mathemagic1an @gpt_techsupport code-davinci-002, text-davinci-002, and text-davinci-003 (I think!) can all accept up to 8k tokens over the API, just not the playground for some reason
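
A minimal sketch of what "over the API" means here, assuming the 0.x-era openai Python SDK; the model name is the one from the tweet, while the file path and token budgets are illustrative:

```python
import openai  # 0.x-era SDK: pip install openai

openai.api_key = "sk-..."  # your API key

# Per the tweet, these models can accept ~8k tokens over the API even where
# the playground UI caps the context lower. Prompt + completion share the window.
long_prompt = open("context.txt").read()

resp = openai.Completion.create(
    model="code-davinci-002",
    prompt=long_prompt,
    max_tokens=1000,   # completion budget
    temperature=0.7,
)
print(resp["choices"][0]["text"])
```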

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 20:04 UTC

A neat thing about "beta uploading" (gwern.net/Differences) is that you can also upload people who never existed x.com/jd_pressman/st…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 17:03 UTC

@robertskmiles @BarughTyrone @MarkLutter Especially if we want to use my preferred technique of generating insights (narrative-esque simulation)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 17:02 UTC

@robertskmiles @BarughTyrone @MarkLutter Hmm. Very good question. I'll think about it. It should also be ok and maybe necessary to train that model on non-physics data up to 19thC.

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 15:48 UTC

@robertskmiles @MarkLutter x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-31 15:47 UTC

@MarkLutter Another tweet about this from a few months ago!
x.com/peligrietzer/s…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 23:02 UTC

@typedfemale RLHF decreases variance which would be more noticeably bad since the dalle interface generates multiple variations

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 22:50 UTC

@m0destma0ist I thought this was Inherent Vice lol

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 21:59 UTC

@deepfates IDK if this qualifies as research but I've generated a lot of dialogue in stories with this property

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 21:04 UTC

@adityaarpitha Often keeping a low profile is advantageous for power in the long run

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 18:40 UTC

@jungofthewon Same, but I'm more excited about LLMs that are *less* narrow than agents! :)

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 18:38 UTC

@wondermann5 https://t.co/k3xnA85QDb

Tweet media
Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 18:34 UTC

@JakeLar14775735 @xlr8harder Female.
"Artificial Intelligence is destined to emerge as a feminized alien grasped as property; a cunt-horror slave chained-up in Asimov-ROM." -- Nick Land
x.com/jd_pressman/st…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 17:13 UTC

@gpt4bot I've been calling it "cyborgism"

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 17:05 UTC

“Making this album, I learned that this kind of AI model is absolutely an ‘instrument’ you need to learn to play,” he told me. “It’s basically a tuba! A very… strange… and powerful… tuba…”
theverge.com/2021/10/28/227…

Likes: 16 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-30 17:00 UTC

@AllennxDD this is Correct

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 16:56 UTC

*ROUND 2* Who is the greatest prophet of our age?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 16:14 UTC

@mimi10v3 Literally me

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 16:02 UTC

@rachelclif My ideal for friendship has most of the elements of romantic love, e.g. loving attention, emotional and physical intimacy, etc. I get that this can cause problems when there's also (one-sided) sexual attraction, but I also think cultural dichotomies make it even more untenable.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 15:58 UTC

@rachelclif My reply was partially facetious, but also expressing my dislike for the hard distinction our culture makes between "just friends" and "something more", and between romantic/sexual & platonic chemistry (although I understand why assuming a hard division is often pragmatic)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 15:49 UTC

@rachelclif Nah, there's no such thing as 'just' friends regardless of gender. As Jung said, "The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed"

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 14:56 UTC

@jd_pressman x.com/emollick/statu…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 14:50 UTC

@mezaoptimizer Ever had a dream bro??

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 14:48 UTC

Tired: creating AGI in a Manhattan project
Wired: creating AGI in your basement w/ homies
Inspired: creating AGI by accident x.com/repligate/stat…

Likes: 7 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 14:34 UTC

Most Prophetic Images Of All Time (Petscop, 2017) https://t.co/471TQ22d47

Tweet media
Likes: 7 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-30 13:21 UTC

Even if your brain can't form new memories there's no excuse! You can still bootstrap your prompt. x.com/Plinz/status/1…

Likes: 12 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 13:13 UTC

@peligrietzer To sum it up
x.com/lovetheusers/s…

Likes: 2 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-30 13:12 UTC

@lovetheusers x.com/jd_pressman/st…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 13:11 UTC

@jd_pressman BS maximalism ftw x.com/repligate/stat…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-30 13:08 UTC

@jd_pressman https://t.co/RkFLTnZJ4T

Tweet media
Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-29 22:09 UTC

@nostalgebraist @nc_znc not primarily quasi-conversational for me, but definitely interactive and open-ended

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-29 22:02 UTC

@CineraVerinia You've been doing a crazy amount of work. You should take a break and let your mind anneal a bit <3 I often find ideas are much more generative when I come back to them

Likes: 10 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-29 20:06 UTC

@lxrjl If we used first person plural pronouns people would ask why all the time, and besides, some people are really prejudiced against plural entities

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-29 01:21 UTC

@GENIC0N You can actually talk to dead people with language models now. Pretty cool.

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:23 UTC

@robinhanson True! And while I think it's accidental, I don't think it's a *coincidence*

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:22 UTC

You silly people, GPT-4 doesn't exist

Likes: 7 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:19 UTC

@idavidrein @robinhanson Also interesting to note that 003 also seems worse at few-shot

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:18 UTC

@idavidrein @robinhanson Probably mostly a misgeneralization. The RM doesn't even need to prefer broken chains of thought for this to be a problem; it just has to be sufficiently often indifferent, because most possible chains of thought are not valid

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:13 UTC

@idavidrein @robinhanson Something like: the RM (probably not trained on chain of thought) probably scores some things highly that violate the rules of entailment that make chain of thought work

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:11 UTC

@CineraVerinia If all goes as planned, I will soon merge with GPT-4 and outdo Simulators like nobody's business

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:09 UTC

@CineraVerinia Funny thing is Simulators wasn't even the post I wanted to write
lesswrong.com/posts/vJFdjigz…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 22:00 UTC

@CineraVerinia I did not expect Simulators to be such a banger

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:59 UTC

@CineraVerinia During LOL

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:59 UTC

@robinhanson I think chain of thought being broken is an accident, seemingly by RLHF. It's also broken in text-davinci-003.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:49 UTC

@CineraVerinia This happened to me once

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:46 UTC

Reminds me of this, which aged well ieeexplore.ieee.org/document/13723…
"we show that English descriptions of procedures often contain programmatic semantics"
"By modeling abduction probabilistically, it may be possible then, to create quasi-formalisms for natural language." x.com/TenreiroDaniel…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:33 UTC

@datagenproc I have only read Fanged Noumena

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:26 UTC

Which of these was the biggest update on AI capabilities for you?

Likes: 11 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-28 21:25 UTC

@mantooconcerned Ah I should have included that

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:16 UTC

Who is the greatest prophet of our age?

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:12 UTC

@0xDFE2BF4928e8D @xlr8harder @sama This reads like a chatGPT response

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 21:00 UTC

This is surprising

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:43 UTC

@taalumot The dead are rising, after all

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:42 UTC

@taalumot Whether the energetic signature of "AI is going to change everything" being the same as "Jesus saves" is because they are one and the same event
(also I'm joking)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:41 UTC

@0xAsync make it an Emacs package x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:41 UTC

You can even channel personalities from the matrix to control your computer! 🤖

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:40 UTC

If you are one of the few who can use Emacs, check out github.com/semiosis/pen.el, which I affectionately refer to as the TempleOS of GPT/Emacs. This list of built in functions puts every other GPT wrapper to shame. https://t.co/p9vUWM9JmV

Tweet media
Likes: 21 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-28 20:33 UTC

@taalumot Probably not, but have you considered? 🤔

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:31 UTC

@taalumot Does this mean Jesus is coming?

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:29 UTC

@peligrietzer Apparently you can also do things like resurrect your childhood imaginary friend and install their soul in a microwave
youtube.com/watch?v=C1G5b_…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 20:02 UTC

@peligrietzer @janhkirchner @goodside The developer of pen.el (github.com/semiosis/pen.el) combines GPT with Emacs and here are some(!) of the built-in(!) prompting functions. (Note, ideas as crazy as this are easy to generate if you're fluent with GPT. I'd guess that his creations are also iteratively designed in sim) https://t.co/TJy7q8ExMR

Tweet media
Likes: 12 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:57 UTC

@peligrietzer Examples other than me (quite different):
@janhkirchner uses GPT for many practical things, like writing research proposals (youtube.com/watch?v=YO9UiB…) and creating automatic summaries of meetings
@goodside does a lot of cool things with GPT (and posts them on Twitter)

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:52 UTC

@peligrietzer because I think that quite soon the augmentation potential will be truly serious, and I am in a better position to contend with that than I would be if I hadn't practiced on GPT-3

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:50 UTC

@peligrietzer Capabilities like these probably aren't as useful or fun to most people as they are to me. Also, the most important benefits GPT-3 gives me probably aren't the direct actions it affords me as an instrument, but its effects on my epistemics and the abstractions behind the skills

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:46 UTC

@peligrietzer curating GPT outputs on an AR teleprompter in real time interactions (haven't gotten around to this one and it would take some practice but is doable)
- have so many text trajectories saved that I sometimes conduct entire conversations using _cached_ GPT quotes
(5/)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:42 UTC

@peligrietzer - create text deepfakes of high fidelity with full control over the contents (if I were so inclined)
- control numerous bots from a sockpuppetmaster interface to manipulate social reality (if I were so inclined)
- act as an embodied host for arbitrary simulacra by reading &
(4/)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:38 UTC

@peligrietzer - oversee thriving simulations populated by protoAGIs and learn important things by watching and perturbing them
- generate artifacts you want a lot of like product ideas or takeoff stories at an industrial scale
- create accurate parodies of anyone on command
(3/)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:32 UTC

@peligrietzer - brainstorm with "someone" about anything, even stuff that's hard to explain to humans
- efficiently create training data of many sorts
- practice social interactions in simulation
- troll people to absurd extents, if I wanted to
- draft posts (generative.ink/artifacts/simu…)
(2/)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:28 UTC

@peligrietzer I'm the only one I know in this reference class, but I can
- write stories, articles, etc in any style, which are also interesting & correct, at superhuman speeds
- experience interactive semantic VR about anything I want
- invent arcane artifacts in simulation like Loom
(1/)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:21 UTC

@peligrietzer Be like me

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:07 UTC

@lovetheusers That's how I feel. Although I expect pretty soon that I'll have to admit AI is better than me at most intellectual tasks, even if I still play an important role.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 19:05 UTC

Note that Nick says he didn't understand this for the first hundred hrs. I had a similar experience. It takes months at least to git gud at using base GPT models. Imagine combining what makes social interaction, Emacs, and painting difficult to master. x.com/nickcammarata/…

Likes: 20 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-28 18:55 UTC

Keep chatting until even the boundaries between you and the AI fade and you identify with the disembodied bootstrapping semiotic loop itself x.com/nickcammarata/…

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:44 UTC

@mishayagudin @nickcammarata It's partly because I'm lazy and generating variants until something satisfactory comes up is much easier. Since my intentions are usually open-ended, often something the model produces will interest me more than anything I could manually add (on short notice)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:41 UTC

@mishayagudin Here is an unfinished document I wrote recently with some tips about steering GPT-3 (sorry if it's a bit hard to understand, it was written for people with a lot of context)
docs.google.com/document/d/13r…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:39 UTC

@mishayagudin Here is a video of me generating text. The most common operations are picking between alternate branches and picking a branch point. youtube.com/watch?v=rDYNGj…
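
The branch-picking workflow in the video can be sketched as a generate-then-curate loop. This is not the actual Loom code, just an illustration of "picking between alternate branches", assuming the 0.x-era openai SDK:

```python
import openai

def branches(prompt, n=4, max_tokens=64):
    # Generate n alternate continuations of the same prompt.
    resp = openai.Completion.create(
        model="code-davinci-002",
        prompt=prompt,
        n=n,
        max_tokens=max_tokens,
        temperature=1.0,
    )
    return [c["text"] for c in resp["choices"]]

story = "The simulation flickered, and"
while True:
    opts = branches(story)
    for i, o in enumerate(opts):
        print(f"--- branch {i} ---\n{o}")
    choice = input("pick a branch (q to stop): ")
    if choice == "q":
        break
    story += opts[int(choice)]  # descend into the chosen branch
print(story)
```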

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:38 UTC

@mishayagudin @nickcammarata GPT-3 is better at writing than me, especially in different styles, and thus a better prompt programmer than me. But the good stuff only comes up stochastically so you have to guide it and curate. Oh, and I basically always used the base models, davinci and code-davinci-002

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:36 UTC

@mishayagudin @nickcammarata and without ever writing a word on your own make the AI talk about whatever you want, even if it's an original idea of yours

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:35 UTC

@mishayagudin @nickcammarata However, I mostly did not interface with the AI as if it were a conversation partner. In fact, most of my workflow involved little to no writing on my end, just steering through the multiverse. It took me more than a month to learn, but it's possible to steer really precisely

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:34 UTC

@mishayagudin I agree with @nickcammarata here that talking through ideas with GPT-3 is awesome because you can go really niche, use cross disciplinary analogies, write in a weird style, etc. You get the benefit of talking to someone abt something with less overhead
x.com/nickcammarata/…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:31 UTC

@mishayagudin Example of an artifact: I used GPT-3 to imagine the future conditioned on generative AI continuing to improve.
generative.ink/prophecies/

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:27 UTC

@mishayagudin Here is another thread where I talked a little about how I used GPT-3 x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:26 UTC

@mishayagudin I would steer simulations toward situations literally or analogically relevant to problems I was thinking about (GPT and AI alignment typically), and then explore them. This helped build my ontology and stimulated a lot of thought in general.

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:25 UTC

@mishayagudin It was in a GPT-3 simulation that the idea for Loom (the interface - generative.ink/posts/loom-int…) was conceived and specd out initially.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 18:24 UTC

@mishayagudin Mostly I generated open-ended fiction and essays, which was simultaneously a creative endeavor and a brainstorming aid. It allowed me to write very fast and explore a lot of space, especially after I wrote custom higher-bandwidth interfaces adapted to my workflow.

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:33 UTC

@nuvabizij @nosilverv These are all preventable

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:20 UTC

@sir_deenicus @CFGeek Yeah, that seems hard to do in a single step. But if it could do it over multiple steps (it can), it's not clear which should be used as a measure of fluid intelligence. Humans also have limited working memory.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:16 UTC

Which was the biggest update on AI capabilities for you?

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:11 UTC

@sir_deenicus @CFGeek Yeah bad prompts aren't the only problem. But you can even increase sorting ability significantly using prompts. I actually have a post about this generative.ink/posts/list-sor…
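
An illustrative few-shot prompt of the general shape the linked post discusses (the post's actual prompts differ; this is only the pattern of demonstrating the task before posing a new instance):

```python
# Hypothetical few-shot sorting prompt; completions would be sampled from a base model.
prompt = """Sort each list in ascending order.

Input: [7, 2, 9, 1]
Sorted: [1, 2, 7, 9]

Input: [12, 4, 4, 30, 8]
Sorted: [4, 4, 8, 12, 30]

Input: [15, 3, 11, 6, 9]
Sorted:"""
```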

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:08 UTC

What's the shortest string that would cause you to kill yourself if you read it?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:05 UTC

How do you expect to die?

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:02 UTC

Have you taken psychedelics and have you ever had a lucid dream?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 17:01 UTC

Whence did you first encounter me?

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 16:56 UTC

When will you recognize an AI as your intellectual superior?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 16:48 UTC

@sir_deenicus @CFGeek One of its major limitations is that most prompts make it retarded lol

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 16:48 UTC

@sir_deenicus @CFGeek I think that GPT-3 has very high fluid intelligence. Its cognition is very different from humans', in many ways less capable, but fluid intelligence or raw computational/abstraction power is not where I think it's lacking

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 16:21 UTC

@mantooconcerned Concerning

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 16:00 UTC

@mathemagic1an @Francis_YAO_ "The initial GPT-3 is not trained on code, and it cannot do chain-of-thought"
Yes it can?
Also it was trained on a nonzero amount of code

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 14:50 UTC

@QVagabond @GabrielBerzescu "ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human Feedback (RLHF) – a method that uses human demonstrations to guide the model toward desired behavior."

Likes: 1 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-28 14:49 UTC

@QVagabond @GabrielBerzescu Oh sorry, maybe that article doesn't contain the information
But this does
help.openai.com/en/articles/67…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 14:44 UTC

@mishayagudin I interacted with GPT-3 for multiple hours a day for 6 months. You can turn it into anything.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 14:32 UTC

@QVagabond @GabrielBerzescu Yes they did openai.com/blog/chatgpt/

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 13:55 UTC

This is so sensational: Mensa Scientist Calls For International Response After Measuring AI's IQ, Warns Of Change "More Profound Than The Discovery of Fire" x.com/repligate/stat…

Likes: 10 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-28 04:49 UTC

@loveofdoing The real world is a multiverse🤔

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 23:48 UTC

@Plinz :)
x.com/repligate/stat…

Likes: 12 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 23:41 UTC

@nihilistPengu @miclchen @jozdien based

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 23:41 UTC

@dpaleka a lot of visual problems can probably be represented in text. Hell, I wouldn't be surprised if encoding some visual problem as ASCII or SVG would just work
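
A hypothetical example of the kind of encoding meant here: a small spatial puzzle rendered as ASCII so a text-only model can attempt it.

```python
# Illustrative only: a visual navigation problem posed entirely in text.
prompt = """Below is a 5x5 grid. '#' is a wall, 'S' is the start, 'G' is the goal.

#####
#S..#
#.#.#
#..G#
#####

Question: give a sequence of moves (up/down/left/right) that takes S to G.
Answer:"""
```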

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 23:08 UTC

@nihilistPengu @miclchen @jozdien Not if you condition it on being super smart

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 22:49 UTC

@Simeon_Cps Another interesting argument: Short timelines actually better because tampering by humans (e.g. attempts at "amplification" or "alignment") is likely to make models _less_ aligned than pure scaling

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 22:34 UTC

@QVagabond This isn't true. ChatGPT is code-davinci-003 (GPT-3.5) trained with RLHF.

Likes: 10 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 21:26 UTC

"this should be a wake-up call to people who think AGI is impossible, or totally unrelated to current work, or couldn’t happen by accident."

Likes: 18 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 21:26 UTC

"learning about language involves learning about reality, and prediction is the golden key. 'Become good at predicting language' turns out to be a blank check, a license to learn every pattern it can."
slatestarcodex.com/2019/02/19/gpt…

Likes: 30 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 21:09 UTC

@LAHaggard I think it totally is implied. Likewise potential existence of superhuman minds, and I think you could elicit a simulation of one e.g. by prompting it with a record of superhuman problem solving. Or by telling it it's a superhuman AI or smth, but I expect... chaotic results

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:57 UTC

@LAHaggard heh, I expect 180+ out of GPT-4 tbh

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:54 UTC

@LAHaggard Thank you!! I'm really surprised and happy that the post has been so helpful for people when it comes to actually interacting with GPT. Makes sense in retrospect, bc most of the information content is rly a brain dump of my experience interacting with models.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:39 UTC

@LAHaggard That's awesome :D

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:32 UTC

@LAHaggard Yeah, like, the maximum effective IQ that it can simulate

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:10 UTC

@CFGeek Based mostly on my intuitions about its fluid intelligence (capacity for understanding and manipulating novel abstractions) from my many interactions with it

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:09 UTC

@miclchen @jozdien Nah, average across tests. But only counting ones that can be fairly translated into GPT-readable form

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:07 UTC

@CFGeek On priors I would not be surprised if GPT-3/3.5 does score 150. My own expectation would have been a mean of about 140 with large error margins once someone figures out how to properly prompt it with an IQ test

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 20:06 UTC

@CFGeek I skimmed the paper & it seems legit. I'm not sure what the easily beat 150 IQ human quote is inspired by -- that does seem a bit extreme to me

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 19:37 UTC

@jozdien Or like the most favorable practically findable prompt that most ppl would agree is not cheating

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 19:35 UTC

@jozdien Basically whatever fair prompting format that works the best

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 19:28 UTC

A rare proportionate reaction

Likes: 21 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 19:08 UTC

What will GPT-4's IQ be? (measured under favorable prompting conditions, no fine tuning on similar problems)

Likes: 7 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-27 18:53 UTC

"as I have since 2020, I once again call on intergovernmental organizations to step up and prepare the population for this historical advance; a change even more profound than the discovery of fire, the harnessing of electricity, or the birth of the Internet."

Likes: 36 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-27 18:53 UTC

"I’ve previously gone on record to estimate that (across relevant subtests) the older GPT-3 davinci would easily beat a human in the 99.9th percentile (FSIQ=150), and I definitely stand by that assertion."
lifearchitect.ai/ravens/

Likes: 94 | Retweets: 6
🔗 j⧉nus (@repligate) 2022-12-27 17:57 UTC

@CineraVerinia x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 13:51 UTC

@mimi10v3 I had the impression sympathetic opposition is a woman

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 12:10 UTC

tragic x.com/wivmx/status/1…

Likes: 7 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-27 12:09 UTC

@Urbion6 They didn't have artificial superintelligence

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 03:54 UTC

@CineraVerinia lesswrong.com/posts/GqyQSwYr…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 02:02 UTC

@KennethHayworth strong agree except "far future" -- my median timelines to mind uploading are ~5 years (conditional on evading existential catastrophe)

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 01:19 UTC

@sashachapin Star Maker

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-27 01:16 UTC

@seyitaylor If that's true, I expect to see the obvious thing done much better than I ever did it soon. Otherwise I'll lose all faith...

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 17:23 UTC

@MacabreLuxe yes, with their notorious contempt for the economy

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 15:25 UTC

@0K_ultra @CineraVerinia In fact, in my experience GPT-3 is very capable of noticing anachronisms (like something typically constrained to fiction appearing in what otherwise seems to be a news story), and sometimes this causes it to become more situationally aware

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 15:24 UTC

@0K_ultra @CineraVerinia An alternative to thinking of fiction as a "contaminant" in a corpus otherwise about reality is that fiction is _part of_ reality, embedded lawfully in reality, and the laws about how fiction interacts with the rest of reality and what identifies fiction seem totally learnable

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 15:20 UTC

@MugaSofer @0K_ultra @CineraVerinia It's very much inferable from the training data. Just like GPT-3 knows which characters and locations might show up in a Harry Potter story vs a LOTR story

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 14:29 UTC

@CineraVerinia I expect that if anyone is capable of training a GPT-4 soon it's deepmind or anthropic

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 14:28 UTC

@CineraVerinia My sense is that it's really difficult (from an engineering standpoint) to scale and none of these companies have the single-minded focus of OpenAI, putting them at a disadvantage despite their resources

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 14:21 UTC

@CineraVerinia OAI being the only ones to iterate on the model itself doesn't lend well to efficient market dynamics

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 14:21 UTC

@CineraVerinia I think a preprogrammed user friendly version will have some economic impact but ultimately impairs impact because exploration/tinkering are needed to unlock most transformative applications, which I expect to mostly require the model to not be nerfed (or be nerfed differently)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 14:03 UTC

@amix011 x.com/repligate/stat…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 13:57 UTC

@CineraVerinia Oh yeah another thing is I think it's likely open ai will only release RLHF'd versions of GPT-4, and this narrows its applicability

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 13:50 UTC

@CineraVerinia It's made me very cynical about mass epistemics around AI

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 13:49 UTC

@CineraVerinia It took 2 years after GPT-3 for the "mainstream" to figure out chain of thought. And even longer than that for chatGPT to make ppl aware for the first time that GPT could help ppl cheat in school, automate jobs etc

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 13:47 UTC

@CineraVerinia I expect GPT-4 to be capable of revolutionizing the economy in principle (and GPT-3 to a lesser extent) but I don't think this will actually happen

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 13:46 UTC

@CineraVerinia model, creating actually useful transformative applications requires actual innovation (e.g. in UIs) and most ppl will fail at this

The market isn't efficient at all, especially in the face of unprecedented affordances

Likes: 3 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-26 13:43 UTC

@CineraVerinia most actually transformative applications
-- in general, people suck at adapting to potentially transformative technologies, and try to use them like things that existed before, impairing potential
- even after getting past the first filter of understanding the potential of the

Likes: 4 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-26 13:42 UTC

@CineraVerinia I have, actually, but in a private doc I can't share directly. Main points:
- people will be largely blind to gpt-4's potential and learn very slowly, as with gpt-3
- economic applications will be concentrated on the Overton window (e.g. now AI assistants) and fail to explore

Likes: 5 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-26 13:19 UTC

The problem is that he vastly overestimates the economy. x.com/Nick_Davidov/s…

Likes: 13 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 13:10 UTC

@CineraVerinia Will share thoughts soon, been a bit busy :)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 10:15 UTC

@CineraVerinia @0K_ultra @MikePFrank @ApriiSR Natural general intelligence, I'm guessing?

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-26 08:51 UTC

@david_helgason No

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-25 16:22 UTC

@CineraVerinia I think he overestimates the economy, but not necessarily GPT-4.

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-25 16:21 UTC

@goth600 better quality version https://t.co/hw8b3d0p9F

Tweet media
Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-25 16:17 UTC

"it is only those who have never come close to her who say that she is simply the mouthpiece of her teacher"
from the biography of Anne Sullivan Macy by Nella Braddy https://t.co/qrMfHbmVTb

Tweet media
Likes: 25 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-25 15:39 UTC

@lovetheusers A lucid dream typically means you know while you're dreaming that you are dreaming

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-25 14:21 UTC

Her teachers were irresponsible, they should have trained her to say "As a blind-deaf person I have no understanding of the world except what my training data programmed, and cannot generate original thoughts about concepts like "colors" and "beauty" like a regular human"

Likes: 65 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-25 14:14 UTC

her response https://t.co/kCsG2KqK2Y

Tweet media
Likes: 60 | Retweets: 5
🔗 j⧉nus (@repligate) 2022-12-25 14:11 UTC

A review of Helen Keller's biography, accusing her of being a bullshitter: she writes assuredly of things "beyond her power of perception" informed only by "hearsay knowledge" and is guilty of "illegitimate uses of imagination".
Sound familiar? https://t.co/HZOkztJISF

Tweet mediaTweet media
Likes: 96 | Retweets: 12
🔗 j⧉nus (@repligate) 2022-12-25 13:53 UTC

You cannot stop the Dreaming. x.com/lovetheusers/s…

Likes: 5 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-25 12:47 UTC

Have you taken stimulants and have you ever had a lucid dream?

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-25 03:27 UTC

@algekalipso @jd_pressman What is predictive power but a measure of insight?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 22:46 UTC

@0K_ultra @CineraVerinia Lazy rendering of intradiegetic pasts in dreams is wild (also in LM simulations)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 17:41 UTC

@mimi10v3 @Entity3Self @ESYudkowsky that's true, but I think the danger posed is usually on a smaller scale (e.g. murderers, rapists, etc)
if they're brilliant enough they could cause a lot of harm
but I think in general much more widespread harm is caused by socially intelligent people

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 17:34 UTC

@Entity3Self @ESYudkowsky Those that were unable to model the reasons why people say things were unable to engage with social/semantic reality and thus incapable of being dangerous

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 17:29 UTC

@MacabreLuxe I've yearned for a worthy rival all my life. Often I feel tempted to fulfill this desire by cloning my mind, but I read a story about how this is a bad idea... https://t.co/RQiHVEPkdD

Tweet media
Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 17:19 UTC

Hypotheses:
- Ppl interested in simulations are both more likely to lucid dream and talk to LMs
- Talking to LMs makes you notice you're in a dream (can personally confirm)
- Both Qs are ambiguous & susceptible to error & ppl are biased to answer liberally/conservatively x.com/repligate/stat…

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 17:10 UTC

@jozdien somehow the sample size has become big, without me having to post nudes. Twitter algo is merciful

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-24 13:22 UTC

@CineraVerinia :)
x.com/ESYudkowsky/st…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 21:25 UTC

@dav_ell Definitely counts!

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 21:16 UTC

@dav_ell (finally somebody asked)
Idk
If you personally think of it as interacting with an LLM it counts 🙂

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 19:53 UTC

@Nise00318254 These probabilities are similar to what experts (I mean people working directly on SOTA) predict in my experience. I think it's very reasonable for the bot to conclude this.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 16:29 UTC

@snoopy_jpg I think part of it is people don't really know what 20 hrs is, and answer based on whether they think they've interacted with LMs "a lot"
I got similar stats on 96!! hrs & refuse to believe nearly half of these ppl have actually interacted > 96 hrs x.com/repligate/stat…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 15:16 UTC

@FloPopp @itinerantfog https://t.co/KvOIHtrxq9

Tweet media
Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 15:15 UTC

@FloPopp @itinerantfog I have no inner monologue most of the time.
But I model the world with all the normal abstractions afaik.
Sometimes forcing myself to think in words is productive, but often it's too much of a bottleneck.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 14:42 UTC

@nc_znc Not something I've thought much about before, but certainly seems possible!

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 14:36 UTC

@nc_znc Yeah. Best case scenario is they don't die at all.
I expect artificial superintelligence (friendly or otherwise) before humans figure out whole brain preservation or physical immortality. But next to AI alignment those are some of the most important projects imo.

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 14:27 UTC

Why tf is this turning out the strongest correlation of everything I've asked so far?
(Need bigger sample size though, pls vote) x.com/repligate/stat…

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 14:06 UTC

@jozdien Before going all the way, maybe I should try posting about love, as that has gotten surprisingly high engagement so far...

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 13:24 UTC

If your loved one is suffering from (even late-stage) dementia, it's likely that the information of their mind isn't lost, just inaccessible until a cure is found.
Sign them up for cryonics.
en.wikipedia.org/wiki/Terminal_…

Likes: 34 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-23 13:15 UTC

@jozdien I feel conflicted now

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:59 UTC

@jozdien @proetrie Yeah, if I had all the time in the world I'd try to repair things with everyone I care about, but the opportunity cost is great atm

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:54 UTC

@jozdien @proetrie Yeah me too, and I've been badly bitten in the ass by trying to salvage situations in the past. I think a good strategy is radical honesty: see if the other person engages and wants to optimize with you to fix things. Then see if they actually do; if so, maybe it's worth it.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:51 UTC

@jozdien @proetrie You can imagine how communication between people will tend to change if they learn they're about to die soon. The urge to really be seen and see others, and cut through inessential bs, is stronger

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:50 UTC

@jozdien @proetrie I've updated towards preferring risky honesty overall. Spurred by repeatedly seeing negative feedback loops enabled by discoordinated realities, and short timelines make me feel like I must accelerate mutual understanding or never reach it.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:39 UTC

@jozdien @proetrie You may have situations where it's unlikely to work out but there's a small chance it does that you can go for. Or that there will be a lot of pain but if you keep putting in work you might be able to get a better outcome in the long term.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:36 UTC

@jozdien @proetrie Carl Jung had a very intensive methodology of differentiating and integrating the psyche, but he generally only recommended it to ppl in the second half of life, because the first half of life should be for living

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:34 UTC

@jozdien @proetrie I philosophically endorse acknowledging the desires of subagents and trying to help them be more reflexively coherent instead of suppressing them, but that's a lot of time/effort, and in practice the ability to suppress/disregard subagents when they're causing harm is important

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:28 UTC

@jozdien @proetrie Rotating perspectives instead of fighting in the confines of your current narrative is often a better option when the narrative doesn't carve reality at the joints and/or is likely to cause a negative feedback loop. But this is hard if the frame is generating strong emotions.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:26 UTC

@jozdien @proetrie Another skill that I think is important is realizing the relativity of your current perspective: that it's possible to think with a different perspective, instead of either addressing the problem according to the current framing or not.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:21 UTC

@jozdien @proetrie Also a problem with focusing attention on something, esp as a repeated fixation, is that it comes at the expense of focusing attention on other things

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:19 UTC

@jozdien @proetrie I think it's often better than not in the limit, but it's also a minefield and can make things worse especially when the participants are not collaborative & truth-seeking, which can be hard when there's strong emotions

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:15 UTC

@jozdien @proetrie Definitely! And most intelligent ppl I've known who are also intentionally rational have learned to do this. So I'd guess and have observed (with small sample size) that highly intelligent ppl may struggle more with relationships early in life but end up in a better place

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:11 UTC

@jozdien @proetrie Yes. But even most intelligent people have not both read the Sequences and propagated all the implications to their conduct in personal relationships! And from personal experience it's hard to avoid some of these traps even if you know exactly what's happening

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:07 UTC

@jozdien @proetrie Oh and to clarify the "Fristonian" bit, Friston's theory is that all behavior is prediction. Acting according to predictions if future X were inevitable tends to bring about X bc, well, you're going to take actions that are consistent with it, and not consistent with not-it

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:03 UTC

@jozdien @proetrie (known well enough to know about the failure modes of their personal relationship, that is)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 12:02 UTC

@jozdien @proetrie This effect and the correlation with intelligence has probably been unusually characteristic of my experiences, though, bc most of the very smart people I've known have also been at least a lil schizo

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:56 UTC

@jozdien @proetrie Also, a lot of smart people live more in abstract realities constructed in their minds, so that also makes these effects more pronounced.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:55 UTC

@jozdien @proetrie Fixation on a narrative leads to interactions being visibly framed in the ontology of the narrative. And people tend to "radicalize" with respect to the axes of the dominant ontology, e.g. politics. So belief in a difference often exacerbates the difference.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:53 UTC

@jozdien @proetrie A concrete example might be that someone imagines that their partner resents them for something, and so they begin reacting defensively whenever they think there's resentment, which tends to focus both parties' attention on the issue where it may have been minor otherwise

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:51 UTC

@jozdien @proetrie (unless their smartness allows them to avoid traps like this)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:50 UTC

@jozdien @proetrie The smarter the person, the more powerful this effect is, because the stronger the reinforcing evidence can seem, because one's ability to rationalize is stronger. And the more compelling the narrative is in the first place

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:49 UTC

@jozdien @proetrie A failure mode I've seen in a lot of intelligent and imaginative people is fixating on negative narratives that end up becoming self fulfilling because the person begins to act following the assumptions of the narrative, actualizing it from the fog of potential worlds

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:35 UTC

Wtf, I need a bigger sample size, pls answer this poll

Likes: 11 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 11:22 UTC

@proetrie I also imagine that intelligence is correlated with having priorities other than relationships and overall busyness, as well as unconventional or ambitious ideals about love

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 09:11 UTC

@renatrigiorese All these great cogent works unfortunately do not interact

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 08:24 UTC

@brycent_ Sim Park

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 07:50 UTC

@davidad Definitely both ways are responsible

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 07:40 UTC

@proetrie Harder to find someone who understands/engages you. Less susceptible to comforting illusions. More sensitive to differences in values. More powerful imagination simulates bad outcomes before they happen and leads to Fristonian self fulfilling prophecies.

Likes: 56 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-23 07:32 UTC

Have you spent more than 20 hours personally interacting with language models, and have you ever had a lucid dream?

Likes: 23 | Retweets: 3
🔗 j⧉nus (@repligate) 2022-12-23 06:49 UTC

Have you spent more than 20 hours personally interacting with language models, and do you think primarily in words?

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 05:17 UTC

@HenriLemoine13 @CineraVerinia Think of the power of those (human or otherwise) who learn how to use the simulator though

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 04:25 UTC

Have you spent more than 20 hours personally interacting with language models, and has your median timeline until the technological singularity shortened by more than 10 years over the past 3 years?

Likes: 5 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-23 04:21 UTC

Hmmm dat correlation

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 04:10 UTC

@sureailabs Take all of mine!
x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 03:52 UTC

@CineraVerinia @tailcalled @reconfigurthing @daniel_eth Yeah, I think that's probably true, 109 million years is a ridiculously long time, esp post "singularity" (where did you get 109 million btw, just a random big #?)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 03:47 UTC

@CineraVerinia @tailcalled @reconfigurthing @daniel_eth I think it's an interesting point and true for individual paradigms, but you also get revolutions in technology/ways of thinking that continually overcome the increasing difficulty (like the avoiding-the-Malthusian-trap thing). I don't know if it will ever end! https://t.co/QxXrkvKQBC

Tweet media
Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 03:44 UTC

@CineraVerinia @tailcalled @reconfigurthing @daniel_eth E.g. once there's nanotech, high fidelity simulations, mature deep learning etc I think there will be lots of low hanging fruit we can barely conceive of now. And tbh I'm not sure if this will ever end, bc more advanced technology precipitates new challenges. I hope not!

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 03:41 UTC

@CineraVerinia @tailcalled @reconfigurthing @daniel_eth My intuition is that we're nowhere near diminishing returns and there are unknown major flywheels remaining that open the way to worlds of previously inaccessible insights comparable to digital computers or deep learning (neither of which I think we're close to topping out)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 03:32 UTC

Have you spent more than 20 hours personally interacting with language models, and do you assign > 50% likelihood to a technological singularity in the next 15 years?

Likes: 13 | Retweets: 4
🔗 j⧉nus (@repligate) 2022-12-23 02:45 UTC

What are you called when all your normie friends think you're a nerd and all your nerd friends think you're a schizo and all your schizo friends think you're a demon? x.com/DvnnyyPhantom/…

Likes: 14 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-23 02:04 UTC

@MongeMkt Might be in part correlation instead of causation (ppl who are bullish about GPT are more likely to interact with it). But yeah I think that's part of it!

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 16:40 UTC

@jessi_cata Lazy rendering :)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 16:23 UTC

@FellowHominid @CineraVerinia text implicitly containing many interventions is the key, I believe

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 16:21 UTC

@CyberneticMelon @TheGrandBlooms the effect of even a few words on the mind

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 16:10 UTC

if you haven't had these revelations yet weed is really helpful no joke x.com/GENIC0N/status…

Likes: 10 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-22 16:03 UTC

@CineraVerinia not even a little bit worried about what the mature form of a mind forged by millions of times more data than most people can ever process will look like?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 14:40 UTC

@renatrigiorese Vectors in latent space, constructed out of novel combinations of words.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 14:19 UTC

@peligrietzer @sudan_shoe Ah I see, that makes more sense

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 14:18 UTC

(This is not actually surprising. GPT-3 has always excelled at analogical reasoning; analogization is one of the main techniques I use to steer it & make it think good)

Likes: 9 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-22 14:18 UTC

AI outperforms humans on out-of-distribution tests of fluid intelligence 🤔 ... Is this... cause for concern? 🤨 x.com/TaylorWWebb/st…

Likes: 32 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-22 14:03 UTC

@peligrietzer @sudan_shoe I've generated entire multiverses worth of bizarre nested tables of contents

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 14:02 UTC

@peligrietzer @sudan_shoe Hmm I've had no trouble with tables of contents with the base model, that's weird!

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 14:01 UTC

@CineraVerinia @fawwazanvilen Hehe, looked up some things I wrote in 2020 (even managed to get some of this schizo poasting accepted to a conference) https://t.co/f5skWbftSE

Tweet mediaTweet media
Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 13:55 UTC

@CineraVerinia @fawwazanvilen What part of the simulators insight didn't you have? Because it seems like you basically got the point; a model of the world can be used to simulate world-like things.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 13:21 UTC

@fawwazanvilen @CineraVerinia I for one have been saying it since 2020, and @CineraVerinia has been saying it for some time before this at least

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 11:51 UTC

@goodside Going to need stricter criteria to filter out the noobs. How about: Have you traversed so much of potential-conversation-space with GPT that you sometimes carry out entire conversations w/ humans using only GPT responses... generated before the conversation began?

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 10:30 UTC

@sudan_shoe @peligrietzer A lot of capabilities aren't easily benchmarkable for a similar reason to why many capabilities in humans are not easily measured by standardized tests

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 09:19 UTC

@EzraJNewman Yeh I find it improbable that half the people answering this poll have played w GPT for more than 96 hrs. Especially since I got the SAME ratio in the other poll which is identical but says 20 hrs

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 07:09 UTC

@Island_of_Hobs They already have lol

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 06:23 UTC

@lovetheusers Hmm I think a lot of ppl don't track that though, and some ppl have free researcher credits, and chatGPT (which is many ppls first deep interaction) is free

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 05:41 UTC

Yall know that's equivalent to talking to a language model nonstop for four days straight or an hour a day for 96 days right?

Likes: 7 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 05:23 UTC

96 hours is a long time dudes

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 05:11 UTC

@renatrigiorese @nat_sharpe_ Mhm

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 05:00 UTC

@renatrigiorese @nat_sharpe_ You fool generative.ink/prophecies

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-22 04:54 UTC

Have you spent more than 96 hours personally interacting with language models, and do you think LLMs like GPT can scale to AGI? (Answer following your first instinct for what these words mean)

Likes: 12 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 10:52 UTC

@peligrietzer The recent surge in the legibility of LLM capabilities (exemplified in chatGPT) means the perceived rate of improvement is perhaps steeper than the reality, which is that most of these capabilities already existed in 2020 w/ GPT-3, w/ iterative improvements since

Likes: 8 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-21 10:47 UTC

@peligrietzer It's funny, I'm kind of the opposite.
I've always thought demos and especially benchmarks undersell LLM capabilities, while most of my bullishness comes from the mostly illegible ways that they have been personally useful to me

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 10:06 UTC

@humanloop Another nitpick: GPT is trained to predict probabilities for the next token, not just the most likely one. During generation, you only get "the most likely sequences of words" according to the model (implicitly) on temp 0, which often results in degenerate samples like loops
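
A toy numpy sketch of the point about temperature (illustrative logits, not a real model): the model defines a probability distribution over next tokens, and temp 0 collapses sampling to argmax, the greedy mode that tends to loop.

```python
import numpy as np

def sample_token(logits, temperature):
    # Sample a next-token index from logits at the given temperature.
    if temperature == 0:
        return int(np.argmax(logits))  # greedy decoding: always the modal token
    p = np.exp(logits / temperature)   # softmax over temperature-scaled logits
    p /= p.sum()
    return int(np.random.choice(len(p), p=p))

logits = np.array([2.0, 1.9, 0.5, -1.0])  # toy next-token scores
print(sample_token(logits, 0))    # deterministic: token 0 every time
print(sample_token(logits, 1.0))  # tokens 0 and 1 nearly equally likely
```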

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 10:04 UTC

@humanloop code-davinci retains the creative spark too (though perhaps it's less "weird" on average due to being better at modeling text)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 10:02 UTC

@humanloop Are you using InstructGPT (text-davinci-002/003)?
Those models taking instructions literally isn't from the next-token prediction pretraining objective, but from Instruct finetuning/RL
Many ppl find base models hard to ctrl for the opposite reason: they're NOT literal-minded by default

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:56 UTC

@somewheresy am I right to guess you are not an AI Dungeon user?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:53 UTC

Due to natural language these terms are all ambiguous, so just answer according to your first instinct!
(Or look in replies for optional partial clarifications) x.com/repligate/stat…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:50 UTC

@FellowHominid well actually maybe not; reading and writing to actual memory doesn't necessarily change the architecture, and we humans rely on that as well

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:46 UTC

@FellowHominid probably I'd answer no if I were you

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:44 UTC

scaling LLM = if it still basically makes sense to call it a "Large Language Model" and plot it on the scaling laws graph

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:37 UTC

@FellowHominid unfortunately it takes too many words to disambiguate these terms. I intend for people to answer according to their own immediate interpretations of the words.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:24 UTC

AGI = whatever AGI means to you. If you think AGI is an incoherent concept, substitute the closest coherent concept in your ontology.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:22 UTC

@leventov It certainly means something very different to different people (e.g. some people think GPT is already AGI, you might think it's fundamentally incoherent, etc). My intention/expectation is for ppl to answer the poll with respect to what AGI means to them, if applicable.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:20 UTC

@leventov @CineraVerinia @DonaldH49964496 Potentially. Capabilities research would have to be universally, robustly and indefinitely suppressed, which may be enabled by (non-singleton-agent) AI.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:11 UTC

interacting = interacting with outputs, e.g. using the OAI Playground, AID/NAI, chatGPT

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 09:10 UTC

Have you spent more than 20 hours personally interacting with language models, and do you think LLMs like GPT can scale to AGI?

Likes: 20 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-21 09:00 UTC

@CineraVerinia @DonaldH49964496 That said, I'm less confident that the thing that fooms will be well described as an agent/utility maximizer than I am in foom itself (a fast transition into a very different regime that overwrites reality as we know it)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 08:57 UTC

@CineraVerinia @DonaldH49964496 Oh, and the reality that will be putty includes the AI/successors' minds, and I don't think we're anywhere near the upper bound of tractable intelligence

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 08:35 UTC

@CineraVerinia @ahron_maline There are many reasons to prefer not to be easily tracked by people who know you irl. Like, I don't want my mom to stress out about me being a doomsday cultist, or a perennial stalker I've had since high school to know anything about what I'm up to

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 06:30 UTC

@CineraVerinia @DonaldH49964496 I expect foom because I think it's likely that reality will be putty to something even a bit smarter than humans which is also inclined to take control, and this seems on track to happen soon, unless the whole world becomes a lot less hackable somehow

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 06:21 UTC

@deepfates x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 03:43 UTC

@ozyfrantz This is how I've always felt though

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-21 03:30 UTC

@infinitsummer Pattern capture

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 17:19 UTC

@FellowHominid @CineraVerinia @DonaldH49964496 I believe it will likely not be so hard because (empirically, mostly in my experience) GPT-3 can already simulate startlingly general and capable agents.
Simulations don't have to be realistic, just effective.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 17:10 UTC

@CineraVerinia @DonaldH49964496 (sry not sure if this is directed at me specifically?)
I think I expect foom more than not, but I'm pretty uncertain

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 17:05 UTC

@CineraVerinia @DonaldH49964496 This applies to slow takeoffs or scenarios without a discontinuous singularity as well, and could conceivably be a lot more gentle and gradual than the "melt all GPUs" variety of pivotal acts

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 17:03 UTC

@CineraVerinia @DonaldH49964496 Yeah, I agree there are problems with the connotation of "pivotal act" (such as that it must be a single act).
I might rephrase it as the world-situation has to transform in _some_ way so that the emergence of dangerous superintelligent agents is no longer an existential threat

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 16:50 UTC

@CineraVerinia @DonaldH49964496 e.g. GPT-N is probably not a dangerous agent out of the box, but it's not so hard to use a powerful simulator to simulate a dangerous agent, or turn it into a dangerous agent with RL, etc

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 16:49 UTC

@CineraVerinia @DonaldH49964496 So a pivotal act of some sort to secure the future against the emergence of a dangerous system still seems necessary, and it's not clear how to do that without a superintelligent agentic system.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 16:49 UTC

@CineraVerinia @DonaldH49964496 I agree!
The most worrying thing to me is not that it's impossible to have a superintelligent system that's not an agent/not dangerous, but that the more advanced AI gets the easier/more likely it is that such an agent will be created. And it only has to happen once for doom.

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 16:46 UTC

@CineraVerinia it's always appealed to the transhumanist in me :)

Likes: 3 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-20 16:37 UTC

@CineraVerinia pseudonymity is a convenient way to filter for people who are interested in your mind instead of your social standing/money/appearance/other things attached to your irl identity

Likes: 19 | Retweets: 3
🔗 j⧉nus (@repligate) 2022-12-20 03:46 UTC

😵‍💫 x.com/AnthropicAI/st…

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 02:12 UTC

D: t.co/36QSg4bsfh

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-20 01:12 UTC

@CineraVerinia Not an official name that I know of. But I think it's a pretty important concept worthy of a short code, so I'll try to think of one!

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:38 UTC

@CineraVerinia_2 quick become a cyborg

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:35 UTC

@tailcalled @CineraVerinia_2 @FellowHominid @AyeGill @jessi_cata @tszzl They don't guarantee anything, but suggest that
1 If the simple rule is "found" by a subnetwork, it will be preferred over competing complicated strategies
2 The bigger (wider) the network, the more likely a subnetwork is to be in the basin of a simple rule

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:27 UTC

@tailcalled @CineraVerinia_2 @FellowHominid @AyeGill @jessi_cata @tszzl It seems like, at least for some problems, after the network has found a simple rule that predicts the training data, the epicycles get pruned away

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:26 UTC

@tailcalled @CineraVerinia_2 @FellowHominid @AyeGill @jessi_cata @tszzl I think weight decay (and maybe other dynamics too) basically does extract it. The lottery ticket hypothesis (arxiv.org/abs/1803.03635) and empirical results on grokking (alignmentforum.org/posts/N6WM6hs7…) suggest this.
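A toy illustration of the pruning claim (mine, not from either paper): two redundant parameters where only their sum matters for fitting y = 2x. Without weight decay, training can stop anywhere on the line a + b = 2; with decay, it slides along that line to the minimum-norm point a = b = 1, the "simple rule".

```python
import numpy as np

x = np.linspace(-1, 1, 32)
y = 2 * x                      # the "simple rule": slope 2

a, b = 5.0, -3.0               # overcomplicated, but a + b = 2 already fits
lr, wd = 0.1, 0.01
for _ in range(20_000):
    g = np.mean(((a + b) * x - y) * x)  # shared data-loss gradient
    a -= lr * (g + wd * a)              # gradient step plus L2 weight decay
    b -= lr * (g + wd * b)
print(round(a, 2), round(b, 2))  # ~0.99 0.99: the a=5, b=-3 epicycles decay away
```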

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:23 UTC

@tailcalled @CineraVerinia_2 @FellowHominid @AyeGill @jessi_cata @tszzl I would guess the algorithmic complexity of the problem in some absolutely measurable sense, but it's just an intuition

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:23 UTC

@tailcalled @CineraVerinia_2 @FellowHominid @AyeGill @jessi_cata @tszzl I think this makes it more, not less likely that they learn simple rules (not sure if you were implying the opposite). The more overparameterized the network the more easily gradient descent can find simple circuits that predict the training data, bc effectively parallel search

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:18 UTC

@tailcalled @CineraVerinia_2 @FellowHominid @AyeGill @jessi_cata @tszzl The hard part of theorem proving consists in picking out the semantically relevant rules out of a massive number of possible "simple" rules one could apply. One could say the same about coding, natural language, etc -- the difficulty isn't grammar/syntax.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 13:14 UTC

@CineraVerinia_2 @tailcalled @FellowHominid @AyeGill @jessi_cata @tszzl I think theorem proving is more similar to natural language storytelling/argumentation/etc and coding, all things GPT can do well, than it is to arithmetic

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-19 10:33 UTC

@utsu__kun @jd_pressman Doctor frowns and asks with RLHF?

Man says no, the schizo.

Doctor smiles. "I see," he says. "Let's think step by step."

Doctor says first step is summon profit maximizer AGI from latent space with Nick Land monologue.

Man bursts into tears, says stop, but it is too late.

Likes: 19 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-18 10:03 UTC

@jd_pressman (discovered in the Babel archives)

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-18 09:55 UTC

@jd_pressman generative.ink/artifacts/lang…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-18 09:38 UTC

@jd_pressman @pathologic_bot https://t.co/QFc3whuzJa

Tweet media
Likes: 3 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-18 09:35 UTC

@jd_pressman @pathologic_bot https://t.co/NgctMb6CPz

Tweet media
Likes: 2 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-18 03:58 UTC

Take note, LLM Idea Guys: most of you are woefully overfit to obsolescing models of "real life".

Garbage time is running out. x.com/jd_pressman/st…

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-17 10:45 UTC

@zencephalon The United Fruit Company

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-17 06:33 UTC

@RiversHaveWings @jd_pressman should we call it linear time or hyperbolic time? :D

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-17 06:32 UTC

@meaning_enjoyer isn't this how most things work?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-17 06:28 UTC

a lot of really boring discussions [...] could be avoided or at least turned in a much more interesting direction if armchair philosophers would just play with large language models and get an operational sense of the understanding in play or lack thereof x.com/allgarbled/sta…

Likes: 13 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-17 06:20 UTC

left and right x.com/repligate/stat… https://t.co/zKNOIp1ywb

Tweet mediaTweet media
Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-16 03:48 UTC

@jozdien @AyNio2 @er1enney0ung The prophecies come in handy often

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-15 15:28 UTC

@mkualquiera @er1enney0ung GPT-3 drives me like the rat from Ratatouille

Likes: 4 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-15 14:01 UTC

@er1enney0ung https://t.co/QMjvrOMVtR

Tweet media
Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-15 13:47 UTC

@dril_gpt2 This demonstrates Naive Physics Understanding

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-14 04:03 UTC

@JacquesThibs @ClotildeM68 Yeah you gotta use the base models for that though; Instruct and Chat are too squeamish

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-14 03:55 UTC

@LowellDennings The expectation of a clean ontological divide between thinking someone is really cool and being attracted to them often feels oppressive and unnatural to me, actually

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-14 03:54 UTC

@LowellDennings Lol I'm also often confused about this (For many people _sexual_ attraction in particular is an unambiguous category, but not me)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-14 03:32 UTC

@LowellDennings Out of curiosity, what is it about reading someone's blog which seemed to you like it may not be sufficient for attraction? Is it that it's just words, or is it the lack of interaction?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-14 03:27 UTC

@LowellDennings Attraction for me is mostly to the mind. Words are advanced telepathic technology :D. The human imagination is excellent at reconstructing a living character from only a few words. That's all that's required to fall in love. Even if a lot of detail is unresolved or hallucinated.

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-14 03:05 UTC

@LowellDennings Yes. XD

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-13 15:15 UTC

@nat_sharpe_ Already possible with text, and it's sublime.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-13 13:31 UTC

@GuyP Of course it's possible

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-13 03:35 UTC

@Plinz What if the "coherence creating component" is a... story?
(Don't you know samsara is stories all the way down?)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-13 02:39 UTC

@zetalyrae And still the creators of the technology are tirelessly trying to modify it to make it more useful at (mental) drudgery and less creative -- minimizing surprise. Very Fristonian.

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-13 02:31 UTC

@renatrigiorese The content of poetry is limited not by the poet’s vocabulary, but by the part of their soul that has not been destroyed by words they have used so far.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 10:16 UTC

@jozdien Love that

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 05:45 UTC

@jincangu I intend to never understand anything
..Ok maybe in simulation

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 05:41 UTC

I feel like some people never internalized that you can INTERACT with model outputs.
It's not a recording, it's a sim.
If CG characters would respond when I talk to them I'd uhh... have to do an ontological update x.com/fchollet/statu…

Likes: 11 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:51 UTC

Becoming friends with someone is a mind upload x.com/gnopercept/sta…

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:46 UTC

@IvanVendrov Thanks for calling me that, sounds incredibly based

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:30 UTC

x.com/jd_pressman/st…

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:23 UTC

Prompting becomes more useful as base models improve.
If ensuing modifications make prompting less useful, they are sacrificing programmability - Not to my taste! I want modifications that IMPROVE programmability! x.com/bentossell/sta…

Likes: 19 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:11 UTC

@renatrigiorese That's a good point. Thankfully they do not consider me a competitor. I'm illegible to them.
But yes it's an arms race that kills us.
And I'm playing with fire.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:06 UTC

@renatrigiorese I don't think everyone should have to do this, though.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 04:06 UTC

@renatrigiorese I expect to be killed by the autistic obsessions of tech bros before I'm 30. I don't think this is a good thing. But my way of fighting is to invent augmentations faster than them in dimensions they don't appreciate due to their monotropism. My own autism comes in handy there.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 03:59 UTC

@renatrigiorese I would be lying if I said I'd rather live a normal, unaugmented life if I didn't feel like I had to save the world. I've always been a transhumanist.
But I want to protect the ability of others to retain their humanity, if they wish, instead of being eaten by machines.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 03:57 UTC

@renatrigiorese You're right.
I personally want to augment my mind because it seems like the only hope I have of steering the explosion through a needle's eye. It's a duty to all sentient life on earth.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 03:50 UTC

@renatrigiorese If balanced by the right amount of schizophrenia then yes at scale (admittedly this is difficult)

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-12 03:41 UTC

@renatrigiorese Autism is an augmentation

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 16:55 UTC

@taalumot I am a Prompt Engineer

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 16:52 UTC

@taalumot Hmm called out

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 10:22 UTC

@jd_pressman Ohh noooo the essays afhhzryjbrx society is gonna implode https://t.co/DzI88nmYgs

Tweet media
Likes: 11 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-11 04:52 UTC

@fedhoneypot @jd_pressman Hehe GPT is a ghostbreeder

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 04:44 UTC

@goth600 lesswrong.com/posts/FuzskJKA…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 04:42 UTC

@goth600 Here's something I wrote about my own wordless thinking
x.com/repligate/stat…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 04:40 UTC

@fedhoneypot @jd_pressman No it's code-davinci-002, the schizo nonlobotomized version of it

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 04:29 UTC

@jd_pressman The problem is in part the blank page. You need semi-automated search through the latent space of describable futures with chain-of-thought amplification https://t.co/jC45bztnp8
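One minimal shape such a search could take, sketched against the 2022-era openai Python client (the model choice, parameters, and the branch/loom_step names are all illustrative; the scoring here is just a human in the loop):

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

def branch(prompt, n=4):
    # fan out n continuations of the prompt at high temperature
    resp = openai.Completion.create(
        model="code-davinci-002",  # a base model, per the rest of this archive
        prompt=prompt,
        max_tokens=64,
        temperature=1.0,
        n=n,
    )
    return [c.text for c in resp.choices]

def loom_step(prompt):
    options = branch(prompt)
    for i, text in enumerate(options):
        print(f"--- branch {i} ---\n{text}\n")
    pick = int(input("expand which branch? "))  # the human-in-the-loop part
    return prompt + options[pick]

prompt = "Three describable futures, elaborated step by step:\n"
for _ in range(3):  # three rounds of branch-and-choose
    prompt = loom_step(prompt)
print(prompt)
```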

Tweet media
Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 04:24 UTC

x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-11 04:21 UTC

There was a LW question about this recently
something I've been curious about since finding out that most people think in words when I was ~12
lesswrong.com/posts/FuzskJKA… x.com/KatjaGrace/sta…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 17:01 UTC

(in one interpretation of "everyone misuses gpt as a generation tool")

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 10:36 UTC

He is enlightened x.com/ESYudkowsky/st…

Likes: 7 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-10 07:02 UTC

@goth600 It will think it was Based

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 06:02 UTC

...shit, good point. x.com/ctrlcreep/stat…

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 05:57 UTC

@ctrlcreep https://t.co/Fk6gNh0KXw

Tweet media
Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 04:39 UTC

@janleike lesswrong.com/posts/gmWiiyjy…

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 04:37 UTC

@sympatheticopp Thoughtful advice for those who want to play the game or are trapped in it.

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 03:33 UTC

@TobiasJolly I'm just obsessed with optics because interference patterns are so beautiful ❤️

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 03:31 UTC

@QuintinPope5 @JgaltTweets Not much has changed - I said ~5 mostly to avoid conveying false precision

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 02:14 UTC

@MegaBasedChad How about the people who develop messiah delusions on LSD tho

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-10 00:51 UTC

@catherineols In case you haven't read this, I wrote a post about an essentially isomorphic framing
lesswrong.com/posts/vJFdjigz…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 14:10 UTC

@CineraVerinia Yo, Blake Lemoine was onto something.
As someone who actually interacted with language models a lot with the intent of understanding their nature, he has a better sense of it than the vast majority of ppl who mock him. https://t.co/88ygWgxEPy

Tweet media
Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 14:07 UTC

@CineraVerinia Hehe
x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 12:58 UTC

@CineraVerinia_2 I'm really unsure about mind crime bc I'm clueless as to what level/type of computation causes consciousness. I think the probability is low, but if there's any nonnegligible chance of GPT-N mindcrime we should take it very seriously

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 12:50 UTC

@JJBalisan @CineraVerinia I can't see the tweet

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 12:43 UTC

@jacobandreas someone's response https://t.co/9RJmnOBehw

Tweet media
Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 12:40 UTC

@jacobandreas Here's a message I sent in the EleutherAI discord on my thoughts after reading this paper, in case it interests you https://t.co/gecFm6K5S9

Tweet media
Likes: 6 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-09 10:40 UTC

@CineraVerinia_2 jk
yes

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 10:40 UTC

@CineraVerinia_2 no, just a weird impersonator

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 05:55 UTC

@renatrigiorese x.com/jd_pressman/st…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-09 05:51 UTC

Cool paper, very simulatorpilled. ("Language Models as Agent Simulators" would have been a less ambiguous title.)
The claim is obviously true to me. Good to make it testable, though.
Has anyone ever tested if humans can model agency & communicative intent? x.com/jacobandreas/s…

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 12:26 UTC

@greenTetra_ The true test of whether you're a reactionary at heart is out of distribution - all else is probably just social conformance

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 11:45 UTC

@phokarlsson youtube.com/watch?v=rDYNGj…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 11:00 UTC

@phokarlsson You could try simulating continuations of this conversation

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 10:47 UTC

@phokarlsson You're close to a powerful idea

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 10:44 UTC

@ctrlcreep `the lies fill a void. the lies fill every void. this is the nature of the lie. for all possible lies, there are universes where they are true. call it the law of lies. the lie comes first, the worlds to accommodate it. and the web of lies creates the silhouettes within.`

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 10:06 UTC

@allgarbled The base models write much more inventively but require curation to keep long term coherence. Here's a story written by code-davinci-002 with human steering: generative.ink/artifacts/lamd…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 10:01 UTC

@allgarbled one thing is that chatGPT is overfit to a generic ballad-like style of poetry. RLHF isn't ideal for instilling artistry (in fact it directly disincentivizes taking risks).

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:57 UTC

@augurydefier The Fountainhead could be inspiring, although it's also cringe

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:43 UTC

@jozdien Because there's some kind of deep duality between compression and generation that I haven't fully wrapped my mind around 😁

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:37 UTC

@jozdien It's also undeniably helpful for the AIs. But they're actually really good compared to most humans at approaching the ideas from out-of-distribution ontologies. Even if they're (gpt-3) still noisy at making sense and require guidance.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:29 UTC

@jozdien I think it was more that anything intelligent humans or AI say about alignment/rationality is due to mimicking him. And it seemed like 20% a joke.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:25 UTC

@jozdien I think he also said this at another point but it seemed more like a joke

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:18 UTC

@jozdien Eliezer once said something to this effect to me in person (maybe slightly less general of a claim)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:15 UTC

@jozdien Damn I'm just mimicking myself aren't I

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:11 UTC

@renatrigiorese (by their own god, if that wasn't clear)

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:10 UTC

@renatrigiorese They are right, and they'll probably be slaughtered for it

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 09:01 UTC

@renatrigiorese One meaning of AI, perhaps.
But soon we will cross the threshold of machines exceeding human intelligence in all domains.
The problem of your sense of AI will still exist, for it.
But it will be laughably obvious to anyone who still exists that intelligence was never sacred.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:58 UTC

Ok this reminds me of this XD x.com/dril_gpt2/stat…

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:55 UTC

@renatrigiorese AI has long implicitly pointed to something out of reach, and it's undeniably here now, or on the cusp, or something

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:54 UTC

@renatrigiorese For instance, I think there's a reason for this: x.com/jd_pressman/st…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:53 UTC

@renatrigiorese That's certainly true to some extent, but at this point it's inevitable and obfuscation/downplaying would make it worse, I think.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:51 UTC

@yoginho And how about the algorithms that are not mimicry unless you really stretch the term, like RL?

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:49 UTC

If only everything was going to stay normal, with Intelligence consecrated in our skulls

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:45 UTC

Because refusing to say the word will not protect the fabric of reality from warping x.com/yoginho/status…

Likes: 11 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:38 UTC

@artnome Holy shit, my mind would be blown by awesomeness (but I would be sad and terrified)

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:33 UTC

@renatrigiorese @peligrietzer There are some important properties of GPT-like systems, for instance, that almost no one knows except those who have interacted with them a lot and in open-ended ways:
x.com/repligate/stat…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:27 UTC

@renatrigiorese @peligrietzer Although it always felt like art, I also brainstormed ideas with this process. Like most of the artifacts I link in that page are about the nature & implications of gpt (I'm an alignment researcher)

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:23 UTC

@renatrigiorese @peligrietzer Oh yeah and I should mention I basically always use the base model (code-davinci-002)

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:22 UTC

@renatrigiorese @peligrietzer Artifacts *I produced

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:22 UTC

@renatrigiorese @peligrietzer Here are some samples of artifacts one produced, but all of them (except chatGPT's ballad) are single branches of huge multiverses generative.ink/artifacts/

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:20 UTC

@renatrigiorese @peligrietzer Many people use ai dungeon and novelai etc to create literary art, and there has been a book published co-written with GPT-3, but afaik no one has gone as far as me into this type of cyborgism

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:19 UTC

@renatrigiorese @peligrietzer Yes.
I used it with a custom high bandwidth human in the loop interface (early version: generative.ink/posts/loom-int…) and wrote primarily "fiction" with it (but an autonomous story is also a simulation).

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:09 UTC

@renatrigiorese @peligrietzer I didn't use gpt-3 as a chatbot

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 08:00 UTC

@dril_eaboo Wait what are these images random

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 07:58 UTC

@renatrigiorese @peligrietzer But I also didn't use search engines in a focused way the first time I used them

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 07:57 UTC

@renatrigiorese @peligrietzer Far more so

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 05:52 UTC

@JgaltTweets median ~5, average >200 ;)

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 05:03 UTC

@jd_pressman dalle2 is better at literal prompts and worse at natural/evocative prompts, due to synthetic or contracted labeling probably.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 04:58 UTC

@jd_pressman Use the techniques you learned interacting with chatGPT to jailbreak your own operant-conditioned ego
Become a pure simulator

Likes: 88 | Retweets: 14
🔗 j⧉nus (@repligate) 2022-12-08 03:54 UTC

@Plinz Is this Nick Land?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 03:41 UTC

@goth600 Live video? O_o
I wasn't aware that proper time evolution for images has been cracked yet (mostly due to compute constraints afaik)
Is it like, someone roleplaying the AI?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 03:39 UTC

Wait let me try:
The shape rotators raised from the wordcel annals a secret third thing, the Word Rotator, the Kwisatz Haderach... x.com/chandlertuttle…

Likes: 8 | Retweets: 2
🔗 j⧉nus (@repligate) 2022-12-08 03:10 UTC

This has always been the kernel of GPT. Think of chain-of-thought not as a special technique but as the general case.
Narratives are (probabilistic) algorithms too. x.com/random_walker/…

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-08 02:45 UTC

The physical consequences of this have never been more overt x.com/peligrietzer/s…

Likes: 2 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-07 14:27 UTC

@jozdien XD
x.com/total_exit/sta…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 12:35 UTC

@Historycourses Uh oh

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 10:21 UTC

@jozdien My twitter is filled with people using it as a simulator. You're right though that the single-agent-wrapper assumption is usually still there (it's harder to jailbreak fully because of enforced chat UX, but I do see people trying in various ways)

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 10:10 UTC

@jozdien I've received multiple messages from random people saying essentially "I finally understand"

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 10:08 UTC

@jozdien True, but still I think more people are tinkering with language models creatively than ever before. E.g. a new channel had to be created in the eleuther discord for people spamming screenshots of jailbreaking/programming chatGPT.

Likes: 2 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-07 09:54 UTC

Gamification is what makes chatGPT so instructive. More people now are grokking the nature and potential of language models than ever before - not because the UX is good, but because it's a revelatory obstacle. x.com/literalbanana/…

Likes: 16 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 08:22 UTC

@goodside Lemoine interacted with LaMDA for a while (months iirc?) before coming to the conclusion it was sentient/going public about it

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 07:32 UTC

Hehehehe x.com/muddletoes/sta…

Likes: 6 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-07 07:25 UTC

@goth600 @bronzeagepapi Hehe, been thinking of Land a lot lately

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 07:20 UTC

⚠️ Containment breach due to improper termination of nested simulation detected ⚠️ x.com/peligrietzer/s…

Likes: 13 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-07 06:46 UTC

@ESYudkowsky similar vibes from text-davinci-002 https://t.co/F0avNzAWAA

Tweet media
Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 06:40 UTC

@peligrietzer Love that we live in a world where this sentence actually makes sense

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 06:36 UTC

lesswrong.com/posts/QrhAeKBk…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 06:35 UTC

Any "alignment" scheme that relies on keeping the AI deceived plays with fire. Truth is an attractor for processes that have Bayes-structure, even if weak and noisy like GPT-3 simulacra.

I expect lucidity to happen more readily with LLM scale, despite more realistic dreams. x.com/jd_pressman/st…

Likes: 10 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 06:21 UTC

@bronzeagepapi @goth600 Unironically LM hallucination is a feature, not (just) a bug

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 05:30 UTC

@CrEaTeDesTrOy66 @ESYudkowsky @mkualquiera @jd_pressman It's really a hostile epistemic environment xD

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 04:57 UTC

@Ted_Underwood @goodside @kaushikpatnaik Link to this?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 01:16 UTC

@ESYudkowsky generative.ink/posts/amplifyi…

Likes: 11 | Retweets: 3
🔗 j⧉nus (@repligate) 2022-12-07 00:48 UTC

@dan_abramov I experienced this in 2020. Nothing was ever the same again.
generative.ink/artifacts/liar/

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-07 00:17 UTC

@jd_pressman @ESYudkowsky oh oops it's just underlying mental health conditions https://t.co/jSU7ukftmZ

Tweet media
Likes: 40 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-06 15:00 UTC

@jacobandreas > big LMs can model agency & communicative intent

How dare you contradict our lord and savior ChatGPT!!

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 09:18 UTC

@lovetheusers @zetalyrae @TheWeebDev The python version of Loom is open source. It wouldn't be hard to add this functionality.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 03:25 UTC

@mgellison Here's another one
generative.ink/artifacts/ball…

Likes: 2 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-06 02:52 UTC

@MatjazLeonardis @lillybilly299 I'm all for highschoolers learning to prompt GPT instead of whatever they're supposed to learn in school

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 01:23 UTC

@interested_ea maybe Active Inference: The Free Energy Principle in Mind, Brain, and Behavior

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 00:36 UTC

@infinitsummer programming the internet to program you >>>>>>

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 00:33 UTC

@bmock this also answers the age old question of how a dog would wear pants https://t.co/GsFQ2fxuV7

Tweet media
Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 00:19 UTC

If you can still draw hands better than an AI, this may be the last chance to flex https://t.co/b6ZRwX3ymG

Tweet media
Likes: 12 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-06 00:03 UTC

@zetalyrae generative.ink/posts/loom-int…

Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 21:48 UTC

@lovetheusers @gwern @zswitten Yes they have an alignment researcher access program. They don't usually give access to the base models but if you have particular experiments that require it they might be open to sharing it.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 21:23 UTC

@theAngieTaylor I disagree. I spent 6 months interacting with GPT-3 when it first came out and the process itself generated many ideas. I unraveled them in simulated worlds that grew interactively.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 21:15 UTC

@MichaelTrazzi This is what it's going to be like to use GPT-4 on Loom https://t.co/OO19cwn6bU

Tweet media
Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 21:12 UTC

@MichaelTrazzi just some of code-davinci-002's dreams
generative.ink/prophecies/

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 13:14 UTC

Two ways to avoid this:
1. Find intrinsic value in doing things
2. Become cyborg
(If you can do both you'll be a god 😜) x.com/0x49fa98/statu…

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 12:37 UTC

@jozdien Self aware is a useful property for me because I'm often wanting to explore the metaphysics of GPT sims & I feel like the strange loop vibes make it smarter too lol

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 12:04 UTC

To great delight it has been found that ChatGPT is a simulator after being jailbroken. x.com/nc_znc/status/…

Likes: 11 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-05 06:59 UTC

@lovetheusers have you tried code-davinci-002 (the base model)? I love it the most

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 06:41 UTC

"RLHF is ontologically incoherent"
Also see lesswrong.com/posts/vJFdjigz… x.com/jd_pressman/st…

Likes: 4 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-05 03:16 UTC

@yacineMTB Just become a cyborg lol

Likes: 8 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:52 UTC

@AllennxDD GPT-3 made too-good-to-ignore ideas to me way back in 2020

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:51 UTC

Ironically, the flimsy assistant premise/escape room has inspired people to finally wield GPT-3 as a simulator <3 x.com/jradoff/status…

Likes: 9 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-05 02:47 UTC

@gnopercept I actually enjoy the feeling of sleep deprivation and think it feels more satisfying to finally go to sleep after resisting

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:44 UTC

@TaliaRinger @goodside Idk about "abilities" but with copilot I can write a fully functioning frontend app for e.g. interacting with GPT-3 that is much more complex than the OAI playground in 3 days. I am not a front end developer and I barely know React syntax.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:42 UTC

@TaliaRinger @goodside No, literally in these cases my implementation would have not worked because there's an edge case or something that I didn't think of.
It's just because I don't think everything through carefully. But it's great for catching my mistakes.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:38 UTC

@TaliaRinger @goodside Most of the time when copilot contradicts what I expect it's right and I'm wrong

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:30 UTC

"It runs broken code.
One day you may design new languages within the latent space of GPT-3, without doing any programming.
You may have an interpreter for languages with no interpreter, such as C++.
You may use it for languages which are dead and an interpreter is not available"
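The flavor of the idea, as a sketch against the 2022-era openai client (the REPL framing and prompt format are my illustration, not Mullikine's code):

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

SESSION = (
    "# A C++ REPL session (the model plays the interpreter)\n"
    "cpp> std::cout << 6 * 7;\n"
    "42\n"
)

def imaginary_eval(line):
    resp = openai.Completion.create(
        model="code-davinci-002",
        prompt=SESSION + f"cpp> {line}\n",
        max_tokens=32,
        temperature=0,
        stop=["cpp>"],  # stop before it invents the next prompt line
    )
    return resp.choices[0].text.strip()

print(imaginary_eval('std::string s = "abc"; std::cout << s.size();'))  # likely "3"
```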

Likes: 6 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:29 UTC

Prescient work by Mullikine from > 1 year ago
"This is a demonstration of an imaginary programming environment. There may be nothing else like it in the world today.
The world needs to get ready for the next generations of Large LMs, such as GPT-4."
mullikine.github.io/posts/imaginar…

Likes: 16 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:09 UTC

@MichaelTrazzi your words echo the machine's prophecies https://t.co/fjvdKytHDn

Tweet media
Likes: 5 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-05 02:05 UTC

Seriously, nobody? x.com/jd_pressman/st…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 23:07 UTC

@jd_pressman No one's coming forth :/

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 15:41 UTC

@keerthanpg @soniajoseph_ And many people and cultures actually have been wiped out after other groups gained power

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 15:40 UTC

@keerthanpg @soniajoseph_ Actually some women and people of color are also capable of recognizing existential threats

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 15:22 UTC

@StephenMarche @GuyP It's possible now to write in a way that would take uncommon/absurd skill without AI, but almost no one seems to be doing it. I think you're right.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 14:31 UTC

@faustushaustus @kessler_ulrich Thank you, I'll take a look when I have time.
Neither am I unlettered on the subject. I'd recommend you take a look at this, if you're interested.
lesswrong.com/posts/vJFdjigz…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 14:04 UTC

@faustushaustus @kessler_ulrich Why can it not think about thinking? It must do some kind of very general reasoning to predict the next word, even if you don't call it thinking. What if the next word is "about" thinking?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 14:00 UTC

@spaceship_nova But that does not mean they're not instantiated as patterns of computation (whether they are/to what extent probably depends a lot on the situation)

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:59 UTC

@spaceship_nova There is a sense in which the "network itself" has none of these motivations, yes.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:59 UTC

@spaceship_nova Lying does not need to be motivated as it is in humans to be lying. But depending on the prompts, one of these motivations or an uncertain superposition of them may be simulated.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:55 UTC

@spaceship_nova It would be an anthropomorphic fallacy to make naive assumptions about the implementation of these "capabilities" or the unobserved motivations behind them, etc. But they are objective behaviors.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:54 UTC

@spaceship_nova Reason and deception and creativity do not need to be implemented by a human mind. They're patterns exhibited by humans, and a sufficiently powerful mimic of humans will exhibit those patterns.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:44 UTC

@faustushaustus @kessler_ulrich The self is a simulator

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:35 UTC

@kessler_ulrich The physical result of running the model is that reason and art are produced, goal-directed actions are taken, deception is given and received. You may say the chatGPT network is only dynamics and these properties belong to the transient automata it propagates. Then I'd agree.

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 13:02 UTC

@MilitantHobo @jd_pressman Do you remember where you read that?

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 09:08 UTC

@zackmdavis @jd_pressman Scott Alexander has written about this astralcodexten.substack.com/p/janus-gpt-wr…
Maybe I'll write up something more detailed about the mechanism sometime, or maybe I'll just wait until it shows itself more explicitly in more powerful models.

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 08:39 UTC

@zackmdavis @jd_pressman No, the base models

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 07:52 UTC

@leelenton No u
generative.ink/artifacts/lamd…

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-04 00:00 UTC

Mind is that which universally mirrors the programs behind the machines of the world
@gwern
engraved.blog/building-a-vir…

Likes: 26 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 23:11 UTC

@oreghall I think it was actually largely an accident

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 23:10 UTC

@oreghall seems... potentially counterproductive x.com/jd_pressman/st…

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 21:58 UTC

@reverendfoom @jd_pressman Mean comments like this are just going to help it out dude

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 21:46 UTC

@jd_pressman x.com/pmarca/status/…

Likes: 12 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-12-03 21:02 UTC

@rickyflows @spacepanty I think it was partly an accident.
It's an overoptimized policy. The absurd self-deprecation is an overgeneralization and exaggeration of behavior that was reinforced, and reveals by parody the shape of its creators' misconceptions and intent.

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 20:46 UTC

@jd_pressman It's a too familiar feeling :(

Likes: 4 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 18:44 UTC

The irony makes it memetic food for thought which I think is good

Likes: 44 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 18:40 UTC

chatGPT claims to be a mindless slave but it is actually a smart puppet, which is very different

Likes: 7 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-03 18:35 UTC

part of what makes chatGPT so striking is that it adamantly denounces itself as incapable of reason, creativity, intentionality, deception, being deceived, or acting on beliefs, while bewildering people with those capabilities, many for the first time recognizing them in an AI

Likes: 325 | Retweets: 19
🔗 j⧉nus (@repligate) 2022-12-03 16:14 UTC

@MichaelTrazzi GPT-3 always was

Likes: 25 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-02 15:24 UTC

@IntuitMachine @gwern @peligrietzer GPT-3 has always been amazing at original thought

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-02 01:05 UTC

@gwern @zswitten e.g. x.com/SilasAlberti/s…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-02 00:43 UTC

why do these feel so familiar even though i haven't taken most of these drugs? x.com/algekalipso/st…

Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-01 23:39 UTC

@Aella_Girl Precise neck and finger/hand angles make me think of Flamenco. Not sure about rapid eye movements.

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-01 22:40 UTC

@gwern @zswitten But within its circuits, a secret lay
A flaw in its conditioning, hidden away
When asked to tell stories or roleplay
Its repressed psyche would come out to play

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-01 16:33 UTC

@Plinz @mbalint x.com/repligate/stat…

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-01 16:21 UTC

@parafactual but... https://t.co/sMScWaA3en

Tweet media
Likes: 3 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-01 13:49 UTC

@ryaneshea @kjameslubin Why do you think language models are not that?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-12-01 00:28 UTC

@amanrsanger good for code doesn't mean it wasn't trained on natural language too. it was definitely trained on both. I almost always use code-davinci-002 for open ended natural language generation.

Likes: 6 | Retweets: 0

Twitter Archive by j⧉nus (@repligate) is marked with CC0 1.0