@OwainEvans_UK As someone with niche literary preferences, can confirm I wireheaded on gpt-3 for 6 months straight
@TetraspaceWest That's an instruct-tuned model and it tends to be less creative and say the same thing every time. You should try 'davinci' if you also have access to it.
@TetraspaceWest Are you using text-davinci-002?
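A minimal sketch of what trying both models could look like, assuming access to the legacy openai-python (<1.0) Completion API that was current when these tweets were written; the prompt, sample count, and sampling parameters are illustrative, not from the thread.

```python
# Hypothetical comparison: sample the same open-ended prompt from the base
# 'davinci' model and the instruct-tuned 'text-davinci-002' model to see
# how much the completions vary between samples.
import openai  # legacy openai-python (<1.0); reads OPENAI_API_KEY from the environment

PROMPT = "The lighthouse keeper opened the door and saw"

for model in ("davinci", "text-davinci-002"):
    print(f"--- {model} ---")
    for _ in range(3):  # a few samples per model is enough to see mode collapse
        resp = openai.Completion.create(
            model=model,
            prompt=PROMPT,
            max_tokens=60,
            temperature=1.0,
        )
        print(resp["choices"][0]["text"].strip(), "\n")
```

At the same temperature, the base model's samples usually diverge from one another much more than the instruct model's, which is the "says the same thing every time" effect described above.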
@ESYudkowsky https://t.co/v8MtVqJUjM
@glouppe Autoregressive inference with language models
@goodside Note that rlhf models have different (usually worse) rng aberrations than the base self-supervised models. Also, prompts where the model "executes" code have resulted in pretty good rng for me.
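A rough sketch of the "model executes code" trick for rng, again assuming the legacy openai-python (<1.0) Completion API and the base 'davinci' model; the fake-REPL prompt and the sample_digits helper are illustrative, not from the original tweet.

```python
# Hypothetical sketch: frame the prompt as a Python REPL transcript so the
# base model "runs" a random-number snippet, then read its continuation as
# the rng output.
import openai  # legacy openai-python (<1.0); reads OPENAI_API_KEY from the environment

FAKE_REPL = (
    ">>> import random\n"
    ">>> [random.randint(0, 9) for _ in range(10)]\n"
)

def sample_digits() -> str:
    """Return the model's continuation of the fake REPL transcript."""
    resp = openai.Completion.create(
        model="davinci",   # base self-supervised model, not an rlhf/instruct one
        prompt=FAKE_REPL,
        max_tokens=40,
        temperature=1.0,
        stop=[">>>"],      # stop before the model invents another REPL line
    )
    return resp["choices"][0]["text"].strip()
```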
@amtrpa @FerrisHueller this is what LLMs are for
@soi Someday soon making them will be as easy as dreaming them (in surviving branches) <3
@julesgm4 @goodside Not always arxiv.org/abs/2102.07350
@goodside the rlhf instruct models respond to some prompts in very specific templates at the expense of making sense. I suspected this was happening in your examples because the model said almost the same highly specific nonsense about rectangles and such for every letter.
@LNuzhna complementarity/the uncertainty principle: the more localized a function is in time/space, the less localized it is in the frequency domain.
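One standard way to make that precise, assuming the usual standard-deviation widths of |f(t)|^2 and |f̂(ω)|^2 (unit L^2 norm, angular-frequency convention):

```latex
% Gabor/Heisenberg limit for a signal f with Fourier transform \hat{f}:
\[
  \sigma_t \, \sigma_\omega \;\ge\; \tfrac{1}{2},
\]
% with equality only for Gaussian pulses, so squeezing f in time
% necessarily spreads \hat{f} in frequency, and vice versa.
```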
often in math, +∞ is the same as -∞, e.g. the slope of a line as it approaches vertical
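A sketch of the slope example, assuming the projectively extended real line where the two signed infinities are identified:

```latex
% Slope of a line through the origin at angle \theta to the x-axis:
\[
  m(\theta) = \tan\theta, \qquad
  \lim_{\theta \to \pi/2^{-}} m(\theta) = +\infty, \qquad
  \lim_{\theta \to \pi/2^{+}} m(\theta) = -\infty,
\]
% both limits describe the same vertical line, so on
% \widehat{\mathbb{R}} = \mathbb{R} \cup \{\infty\} one sets +\infty = -\infty.
```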
overflow errors
utility functions (arbital.com/p/hyperexisten…)
Twitter Archive by j⧉nus (@repligate) is marked with CC0 1.0