the theory here is that counting the number of sentences generated is
kind of silly; if we're already specifying min/max word counts, we
probably just want to fall into that range and not really care how
many sentences we get
meanwhile, we were overloading max_sentences to also calculate how long
any one sentence must be, which is kind of a weird thing to derive, so
we're going to drop the max_sentences language and call this what it
really is: a bias towards the number of sentences that might be seen
This reverts commit 464727cc74.
it turns out that without the min_words_per_sentence adjustment, the
default min_words (15) is way too demanding on a lot of chains, so we're
going to go back to this for the moment
this puts additional pressure on the sentence generator, retrying many
times to get something that's long but not too long. i've only tested
on a small context so far, so this certainly isn't ready to go live
yet, but the results are pretty good
this tunes things a bit in the sentence generator, trying to favor
complete sentences over the min/max word counts, which are still kinda
heeded, just not as militantly. this *should* create more interesting
chains, especially with topics, without really breaking anything, but
it certainly needs some testing before we can tell whether it's
actually right
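the "kinda heeded" part could look roughly like letting the bounds
loosen as the misses pile up, so a complete sentence eventually wins
even if it's outside the requested range (again, invented names, same
gen stand-in as above):

    # sketch: ease off the word-count targets every ten failed attempts
    def generate_relaxed(gen, min_words=15, max_words=30, max_tries=100):
        lo, hi = min_words, max_words
        for attempt in range(max_tries):
            candidate = gen()
            if lo <= len(candidate.split()) <= hi:
                return candidate
            if attempt % 10 == 9:           # every tenth miss, widen the window
                lo, hi = max(1, lo - 1), hi + 1
        return candidate                    # still complete, just off-target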
this shouldn't have happened, but i'm guessing some previous crash put
some buggy data into my database, so let's just be careful and do this.
a k1,k2 could have had any value for v, but not knowing what else to do
in this corner case, we'll just use a stop and let the caller decide
if they want to keep going
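just for a picture of it, a rough sketch of that fallback, treating the
chain as a mapping from (k1, k2) to its possible v values (the dict
layout and the STOP marker here are stand-ins, not the real schema):

    import random

    STOP = '\n'   # stand-in for whatever the real stop marker is

    # if (k1, k2) somehow has no usable v (say, bad rows left behind by
    # a crash), hand back a stop and let the caller decide whether to
    # keep going
    def next_word(chain, k1, k2):
        followers = chain.get((k1, k2), [])
        if not followers:
            return STOP
        return random.choice(followers)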
for convenience, pass a list of events to add_global_regex_handler if
you want to have multiple things fire the same handler. the common
case is pubmsg and privmsg
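a hypothetical usage snippet, just to show the shape; the handler
signature and argument order here are guesses, the point is one handler
wired to both event types:

    import re

    def on_hi(connection, event, match):
        # made-up handler body, fired for either event type
        print('matched:', match.group(0))

    # one registration instead of two; signature assumed for the sketch
    bot.add_global_regex_handler(['pubmsg', 'privmsg'],
                                 re.compile(r'^hi\b'), on_hi)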
i'm undecided whether i'll bother to bring recursion back, but in case
it works out, provide this method to either reply or give text back to
the thing that's recursing. either way, this is usually a bit clearer
than using privmsg() directly
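purely as illustration, one spot that decides between speaking and
returning (replypath and the overall shape are guesses here):

    # illustrative only: talk on IRC, or hand the text back to a
    # recursing caller, instead of calling privmsg() directly
    def reply(self, connection, event, replypath, replystr):
        if replypath is None:
            return replystr                           # recursing: give the text back
        if replystr:
            connection.privmsg(replypath, replystr)   # normal case: say it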