Spaces: seanpedrickcase / llm_topic_modelling
Status: Runtime error
main / llm_topic_modelling / tools
4 contributors · History: 73 commits

Latest commit: a3bd98a by seanpedrickcase, "Updated spaces GPU usage for load_model", about 8 hours ago
| File | Size | Last commit message | Last updated |
|------|------|---------------------|--------------|
| __init__.py | 0 Bytes | First commit | 9 months ago |
| auth.py | 2.43 kB | Code reorganisation to better use config files. Adapted code to use Gemma 3 as local model. Minor package updates | 3 months ago |
| aws_functions.py | 8.62 kB | It should now be possible to load the local model globally at the start to avoid repeated loading throughout the stages of topic extraction | about 10 hours ago |
| combine_sheets_into_xlsx.py | 16.8 kB | Enhanced app functionality by adding new UI elements for summary file management, put in Bedrock model toggle, and refined logging messages. Updated Dockerfile and requirements for better compatibility and added install guide to readme. Removed deprecated code and unnecessary comments. | 6 days ago |
| config.py | 20.5 kB | Added speculative decoding to transformers calls for gemma 3 | about 10 hours ago |
| custom_csvlogger.py | 12.7 kB | Merge branch 'dev' into gpt-oss-compat | 8 days ago |
| dedup_summaries.py | 51.2 kB | Added speculative decoding to transformers calls for gemma 3 | about 10 hours ago |
| helper_functions.py | 33 kB | Added possibility of adding examples quickly to the input files | 5 days ago |
| llm_api_call.py | 97.8 kB | Added speculative decoding to transformers calls for gemma 3 | about 10 hours ago |
| llm_funcs.py | 53.7 kB | Updated spaces GPU usage for load_model | about 8 hours ago |
| prompts.py | 10 kB | Optimised prompts for llama.cpp prompt caching | 8 days ago |
| verify_titles.py | 44.9 kB | It should now be possible to load the local model globally at the start to avoid repeated loading throughout the stages of topic extraction | about 10 hours ago |