by tcsenpai on 10/11/24, 3:45 PM with 33 comments
by RicoElectrico on 10/11/24, 7:20 PM
Is llama 2 a good fit considering its small context window?
by asdev on 10/12/24, 1:05 AM
by chx on 10/11/24, 10:07 PM
I presume you want information of some value to you, otherwise you wouldn't bother reading the article. Yet you feed it to a probabilistic algorithm, so you cannot know how the output relates to the input. Take https://i.imgur.com/n6hFwVv.png: you can somewhat decipher what this slop wants to be, but what if the summary leaves out, invents, or inverts some crucial piece of info?
by tcsenpai on 10/13/24, 4:55 PM
# Changelog
## [1.1] - 2024-03-19
### Added
- New `model_tokens.json` file containing token limits for various Ollama models.
- Dynamic token limit updating based on the selected model in options.
- Automatic loading of model-specific token limits from `model_tokens.json`.
- Chunking and recursive summarization for long pages (see the sketch after the changelog).
- Better handling of markdown returns.
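For reference, `model_tokens.json` plausibly maps Ollama model names to their context-window sizes. The entries below are hypothetical; the actual file may use different keys and values:

```json
{
  "llama2": 4096,
  "llama3": 8192,
  "mistral": 8192,
  "mixtral": 32768
}
```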
### Changed
- Updated `manifest.json` to include `model_tokens.json` as a web accessible resource.
- Modified `options.js` to handle dynamic token limit updates:
  - Added `loadModelTokens()` function to fetch model token data.
  - Added `updateTokenLimit()` function to update the token limit based on the selected model.
  - Updated `restoreOptions()` function to incorporate dynamic token limit updating.
  - Added an event listener for model selection changes.
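A minimal sketch of how `loadModelTokens()` and `updateTokenLimit()` might fit together in `options.js`, assuming a `<select id="model">` and an `<input id="tokenLimit">` on the options page (the element IDs and the 4096 fallback are assumptions, not the extension's actual code):

```javascript
// Fetch the model-to-token-limit map bundled with the extension.
// chrome.runtime.getURL resolves the path to model_tokens.json,
// which manifest.json exposes as a web accessible resource.
async function loadModelTokens() {
  const response = await fetch(chrome.runtime.getURL("model_tokens.json"));
  return response.json();
}

// Update the token limit field to match the selected model,
// falling back to a conservative default for unknown models.
async function updateTokenLimit() {
  const tokens = await loadModelTokens();
  const model = document.getElementById("model").value;
  document.getElementById("tokenLimit").value = tokens[model] ?? 4096;
}

// Re-run the update whenever the user picks a different model.
document.getElementById("model").addEventListener("change", updateTokenLimit);
```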
### Improved
- User experience in options page with automatic token limit updates.
- Flexibility in handling different models and their respective token limits.

### Fixed
- Potential issues with incorrect token limits for different models.
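The chunking and recursive summarization item is what makes small context windows like Llama 2's workable: split the page text so each piece fits the model's limit, summarize each piece, then summarize the combined summaries until the result fits. Here is a rough sketch of that idea, not the extension's actual code; the ~4-characters-per-token heuristic and the prompt wording are assumptions, while the `/api/generate` request shape follows Ollama's documented API:

```javascript
// Rough heuristic: ~4 characters per token when sizing chunks.
function splitIntoChunks(text, tokenLimit) {
  const maxChars = tokenLimit * 4;
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// One request to a local Ollama server's generate endpoint.
async function summarizeOnce(text, model) {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: `Summarize the following text:\n\n${text}`,
      stream: false,
    }),
  });
  const data = await response.json();
  return data.response;
}

// Recursive summary: if the text (or the concatenation of partial
// summaries) is still too long for the context window, split it,
// summarize each chunk, and summarize the result again.
async function summarize(text, model, tokenLimit) {
  if (text.length <= tokenLimit * 4) {
    return summarizeOnce(text, model);
  }
  const partials = [];
  for (const chunk of splitIntoChunks(text, tokenLimit)) {
    partials.push(await summarizeOnce(chunk, model));
  }
  return summarize(partials.join("\n\n"), model, tokenLimit);
}
```

This is also a plausible answer to RicoElectrico's question upthread: a 4k-context model like Llama 2 can still summarize long pages this way, at the cost of extra round trips to the model.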
by oneshtein on 10/12/24, 3:44 AM
by donclark on 10/11/24, 7:31 PM