j⧉nus (@repligate)'s Tweets - 2022-08

🔗 j⧉nus (@repligate) 2022-08-31 06:37 UTC

@OwainEvans_UK As someone with niche literary preferences, I can confirm I wireheaded on gpt-3 for 6 months straight

Likes: 3 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-08-26 21:36 UTC

@TetraspaceWest That's an instruct-tuned model, and it tends to be less creative and to say the same thing every time. You should try 'davinci' if you also have access to it.

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-26 19:38 UTC

@TetraspaceWest Are you using text-davinci-002?

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-26 04:49 UTC

Finally
arxiv.org/abs/2208.04024

Likes: 1 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-20 21:17 UTC

@InquilineKea Conjecture: ~90%?

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-20 21:13 UTC

@ESYudkowsky https://t.co/v8MtVqJUjM

Tweet media
Likes: 6 | Retweets: 1
🔗 j⧉nus (@repligate) 2022-08-17 03:46 UTC

@glouppe Autoregressive inference with language models

Likes: 0 | Retweets: 0
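The reply above names the technique without spelling it out; here is a minimal sketch of what autoregressive inference with a language model looks like in code, using GPT-2 via Hugging Face transformers purely as an illustrative stand-in (the model, prompt, temperature, and token budget are all assumptions, not anything from the tweet):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Autoregressive inference: the model predicts a distribution over the next
# token, one token is sampled, appended to the context, and the loop repeats.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Autoregressive inference means", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                              # generate 20 tokens
        logits = model(input_ids).logits[:, -1, :]   # next-token logits only
        probs = torch.softmax(logits, dim=-1)        # temperature 1.0 sampling
        next_token = torch.multinomial(probs, num_samples=1)
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))

Each generated token is conditioned on everything sampled before it, which is the property the reply is pointing at.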
🔗 j⧉nus (@repligate) 2022-08-16 19:25 UTC

@goodside Note that RLHF models have different (usually worse) RNG aberrations than the base self-supervised models. Also, prompts where the model "executes" code have resulted in pretty good RNG for me.

Likes: 1 | Retweets: 0
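The tweet above describes the prompting trick without showing it; here is a minimal sketch of the "model executes code" framing for drawing random-looking numbers from a base model, using the pre-1.0 openai Python library's Completion endpoint. The prompt wording, token budget, and temperature are illustrative guesses rather than anything from the tweet; 'davinci' is the base model recommended elsewhere in this archive.

import openai  # pre-1.0 openai-python interface (Completion endpoint)

openai.api_key = "YOUR_API_KEY"

# Frame the request as a Python REPL transcript so the model "executes" the
# code and continues with the list that the interpreter would print.
prompt = (
    ">>> import random\n"
    ">>> [random.randint(0, 9) for _ in range(10)]\n"
)

response = openai.Completion.create(
    model="davinci",    # base (self-supervised) model, not an instruct/RLHF one
    prompt=prompt,
    max_tokens=40,
    temperature=1.0,    # keep sampling diverse so the digits don't collapse to one pattern
    stop="\n",          # stop at the end of the fake REPL output line
)

print(response["choices"][0]["text"].strip())
# e.g. "[3, 7, 0, 8, 2, 9, 5, 1, 4, 6]", or whatever digits the model continues with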
🔗 j⧉nus (@repligate) 2022-08-14 02:27 UTC

@amtrpa @FerrisHueller this is what LLMs are for

Likes: 0 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-12 07:43 UTC

@soi Someday soon making them will be as easy as dreaming them (in surviving branches) <3

Likes: 2 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-12 00:23 UTC

@julesgm4 @goodside Not always arxiv.org/abs/2102.07350

Likes: 9 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-11 14:07 UTC

@goodside The RLHF instruct models respond to some prompts in very specific templates at the expense of making sense. I suspected this was happening in your examples because the model said almost the same highly specific nonsense about rectangles and such for every letter.

Likes: 12 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-11 10:04 UTC

@goodside https://t.co/Iyogmo0p8f

Tweet media
Likes: 55 | Retweets: 0
🔗 j⧉nus (@repligate) 2022-08-11 04:13 UTC

@LNuzhna complementarity/the uncertainty principle: the more localized a function is in time/space, the less localized it is in the frequency domain.
often in math, +∞ is the same as -∞, e.g. the slope of a line
overflow errors
utility functions (arbital.com/p/hyperexisten…)

Likes: 1 | Retweets: 0
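The first item in the reply above (time-frequency complementarity) has a standard quantitative form; here is a short LaTeX sketch of the Fourier uncertainty relation, stated under the unitary, angular-frequency convention for the transform (the convention is an assumption on my part; the constant shifts under other conventions):

% For a unit-energy signal f with Fourier transform \hat{f}, define the spreads
%   \sigma_t^2      = \int (t - \mu_t)^2           |f(t)|^2            \, dt,
%   \sigma_\omega^2 = \int (\omega - \mu_\omega)^2 |\hat{f}(\omega)|^2 \, d\omega.
% Then the time and frequency spreads cannot both be small:
\[
  \sigma_t \, \sigma_\omega \;\ge\; \tfrac{1}{2},
\]
% with equality exactly for Gaussian pulses: squeezing f in time forces
% |\hat{f}| to spread out in frequency, and vice versa.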

Twitter Archive by j⧉nus (@repligate) is marked with CC0 1.0