Used by the LLaMA 1/2 model family:
SillyTavern responses are being cut off: every single message for the past 20 or so AI replies has been messed up. Reply order strategies decide how characters in group chats are drafted for their replies. As a last resort, you can try turning on Multigen (in the User Settings panel), but it will make responses come out slower because it makes the AI produce small replies back-to-back.
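Multigen's behavior, as described above, amounts to stitching one reply together from several small generations. A minimal sketch of that loop follows; the `generate` callback and chunk size are hypothetical stand-ins, not SillyTavern's actual API:

```python
def multigen(generate, prompt, chunk_tokens=50, max_chunks=4):
    """Build one reply from several small generations back-to-back.

    `generate` is a hypothetical callback: (prompt, max_tokens) -> text.
    Each chunk is appended to the prompt so the next chunk continues it,
    which is why overall responses come out slower.
    """
    reply = ""
    for _ in range(max_chunks):
        chunk = generate(prompt + reply, chunk_tokens)
        if not chunk:
            break  # model produced nothing more; stop early
        reply += chunk
    return reply

# Toy stand-in model that emits a fixed sequence of small chunks.
chunks = iter(["Hello", " there", "!", ""])
reply = multigen(lambda p, n: next(chunks), "User: hi\nBot:")
```

The extra round trips per chunk are the slowdown the text warns about.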
Describe the bug: when ST sends its context to KoboldAI, KAI cuts off the beginning of the context. Try this if your prompts get cut off at high context lengths. Pygmalion 7B. Have you searched for similar bugs?
Why is SillyTavern AI not working or not responding? It is less prone to getting cut short, but it still happens; what's weird is that Chronos 33B is supposed to use the Alpaca format. Context Size: how many tokens of the chat are kept in context.
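The Context Size setting above caps how many tokens of the chat survive; older messages are dropped first, which is why the beginning of the context disappears at high context lengths. A minimal sketch of that trimming, assuming a crude whitespace token count rather than the backend's real tokenizer:

```python
def trim_to_context(messages, context_size):
    """Keep the newest messages whose combined token count fits the budget.

    Tokens are approximated by whitespace splitting (an assumption; real
    backends use a model-specific tokenizer). The oldest messages are
    dropped first, so the start of the conversation is what gets cut.
    """
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > context_size:
            break                           # next-oldest message won't fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = ["A: one two three", "B: four five", "A: six seven eight"]
window = trim_to_context(history, 6)  # only the newest message fits
```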
Make your bot responses better with the Roleplay preset. Natural Order tries to simulate the flow of a real human conversation.
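A "Natural Order"-style strategy could be sketched as favoring whichever characters were addressed in the last message, since in a real conversation the person spoken to tends to answer next. The scoring below is an illustrative assumption, not SillyTavern's exact algorithm:

```python
def draft_order(characters, last_message):
    """Order group-chat characters for reply drafting.

    Characters mentioned by name in the last message go first, mimicking
    natural conversational turn-taking; ties keep the original list order.
    """
    lowered = last_message.lower()
    mentioned = [c for c in characters if c.lower() in lowered]
    others = [c for c in characters if c not in mentioned]
    return mentioned + others

order = draft_order(["Alice", "Bob", "Cara"], "What do you think, Bob?")
```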
If you have the best settings for SillyTavernAI, please tell me! Pick this if you use a LLaMA 1/2 model.
Describe the bug: dialogue with the character has broken grammar, mostly missing words such as "the", "a", etc. Response Length: how much text you want to generate per message. Added new setting: allow {{user}}:
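The `{{user}}` macro mentioned above is substituted into the prompt text before generation. A minimal sketch of that replacement, with illustrative names (the helper and macro table here are assumptions, not SillyTavern's internals):

```python
def expand_macros(template, user_name, char_name):
    """Replace SillyTavern-style {{user}} / {{char}} macros in a template.

    Plain string replacement is enough for these fixed macros; names
    passed in here are whatever the user and character cards define.
    """
    return (template
            .replace("{{user}}", user_name)
            .replace("{{char}}", char_name))

line = expand_macros("{{char}}: nice to meet you, {{user}}!", "Anon", "Seraphina")
```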