• vrighter@discuss.tchncs.de
    9 months ago

    That is a studied, documented, surefire way to destroy your model very quickly. It just does not work that way: if you train an LLM on the output of another LLM (or on its own output), it will implode. The phenomenon is known as model collapse.
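
    A toy sketch of the effect (my own illustration with a stand-in "model", not code from any particular paper): repeatedly "train" a simple Gaussian model on samples generated by the previous generation's model. Each generation's estimation error compounds, and the estimated spread drifts toward zero instead of staying at the true value.

    ```python
    import random
    import statistics

    def train_toy_model(samples):
        """'Train' a toy model: estimate the mean and stddev of its training data."""
        return statistics.mean(samples), statistics.stdev(samples)

    random.seed(0)

    # Generation 0 trains on real data: 10 samples from a standard normal.
    data = [random.gauss(0.0, 1.0) for _ in range(10)]
    mu, sigma = train_toy_model(data)

    # Every later generation trains only on synthetic output of the previous model.
    for generation in range(200):
        data = [random.gauss(mu, sigma) for _ in range(10)]
        mu, sigma = train_toy_model(data)

    # The estimated spread has collapsed far below the true value of 1.0.
    print(f"stddev after 200 generations: {sigma:.6f}")
    ```

    The small sample size (10 per generation) exaggerates the effect for demonstration; with real LLMs the same compounding of sampling and approximation error plays out over far more parameters, with tail behaviors disappearing first.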