---
title: "πŸ€– Agentic Browser"
emoji: "πŸ€–"
colorFrom: "blue"
colorTo: "purple"
sdk: streamlit
app_file: app.py
pinned: false
license: "mit"
tags:
- ai
- browser
- agent
- llm
- streamlit
short_description: An autonomous browser agent for web-based tasks
---

# πŸ€– Agentic Browser

An AI-powered browser agent for web-based tasks, driven by locally hosted open-source language models. The application provides a chat interface for interacting with a selection of these models.

## 🌟 Features

- **Local AI Models**: Run open-source models locally, with no external API calls
- **Multiple Models**: Choose between different models based on your needs
- **Chat Interface**: Easy-to-use conversational interface
- **Real-time Responses**: Streaming text generation with visual feedback
- **Customizable**: Adjust parameters like temperature for different response styles

## πŸš€ Quick Start

1. Select a model from the sidebar
2. Click "Load Model" to initialize the selected model
3. Start chatting in the main interface
4. Adjust temperature for different response styles
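The chat loop behind these steps needs to flatten the conversation history into a single prompt for the model. The helper below is a hypothetical sketch of that step; the tag format and function name are illustrative assumptions, not the app's actual chat template:

```python
def build_prompt(history, system_prompt="You are a helpful browser agent."):
    """Flatten chat history (a list of {"role", "content"} dicts, as
    Streamlit apps commonly keep in st.session_state) into one prompt
    string. The <|role|> tag format here is an illustrative assumption."""
    lines = [f"<|system|>\n{system_prompt}"]
    for msg in history:
        lines.append(f"<|{msg['role']}|>\n{msg['content']}")
    # Leave an open assistant turn for the model to complete.
    lines.append("<|assistant|>\n")
    return "\n".join(lines)
```

In the app, each user message would be appended to the history before calling `build_prompt`, and the model's reply appended afterward.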

## πŸ€– Available Models

- **TinyLlama**: Fast and lightweight, great for quick responses
- **Mistral-7B**: More powerful conversational model
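A minimal sketch of how the sidebar choices could map to Hugging Face repository ids. The repo ids and registry name below are assumptions for illustration, not taken from the app's source:

```python
# Hypothetical mapping from sidebar labels to Hugging Face repo ids;
# the exact ids the app uses are assumptions for illustration.
MODEL_REGISTRY = {
    "TinyLlama": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "Mistral-7B": "mistralai/Mistral-7B-Instruct-v0.2",
}

def resolve_model(name: str) -> str:
    """Return the repo id for a sidebar selection, failing with a clear
    error for unknown names instead of deep inside model loading."""
    try:
        return MODEL_REGISTRY[name]
    except KeyError:
        raise ValueError(
            f"Unknown model '{name}'. Choose one of: "
            + ", ".join(MODEL_REGISTRY)
        ) from None
```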

## βš™οΈ Settings

- **Model Selection**: Choose from available models
- **Temperature**: Control response randomness (low values such as 0.1 are near-deterministic; values near 1.0 are more creative)
- **Real-time Loading**: Models are loaded on-demand for efficiency
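Temperature works by rescaling the model's logits before sampling: dividing by a small temperature sharpens the distribution toward the top token, while values near 1.0 leave it more spread out. A self-contained sketch of that effect (plain Python, no model required):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Apply temperature scaling, then softmax. temperature must be > 0."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.1)  # nearly all mass on top token
hot = softmax_with_temperature(logits, 1.0)   # probability more spread out
```

With temperature 0.1 the top token's probability approaches 1, which is why low settings feel deterministic.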

## πŸ› οΈ Technical Details

This application uses:
- Streamlit for the web interface
- Hugging Face Transformers for model loading
- PyTorch for model inference
- Either CPU or GPU for inference, depending on available hardware
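Streaming responses (as listed under Features) typically come from a generator that yields text chunks as the model produces them; Streamlit can render such a generator directly with `st.write_stream`. A pure-Python sketch of the pattern, with a stubbed token source standing in for the real model:

```python
from typing import Iterator

def stream_tokens(text: str) -> Iterator[str]:
    """Stand-in for a model's incremental output: yields the reply one
    word at a time, the way transformers' TextIteratorStreamer yields
    decoded chunks. The real app would pull chunks from the model."""
    for word in text.split():
        yield word + " "

# In the Streamlit app this would be rendered incrementally with:
#   st.write_stream(stream_tokens(reply))
reply = "".join(stream_tokens("Hello from the agent"))
```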

## πŸ“„ License

This project is licensed under the MIT License.