barnacle (All-American), Dec 11, 2024, 5:19:25pm
I've been using qwen2.5; I'm not sure if my machine can handle llama3.3's 70B parameters.
I'm downloading it now to try it out. Hopefully they release some lower-parameter-count llama3.3 models soon.
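Whether a machine can handle a 70B-parameter model mostly comes down to memory. As a rough sketch (the 20% runtime/KV-cache overhead factor here is an assumption, and real usage varies with quantization and context length), the requirement can be estimated like this:

```python
# Back-of-envelope memory estimate for loading a local LLM.
# Rule of thumb: bytes needed ≈ parameter count × bytes per parameter,
# plus some overhead for the KV cache and runtime (assumed ~20% here).

def model_memory_gb(params_billions: float, bits_per_param: float,
                    overhead: float = 0.2) -> float:
    """Approximate RAM/VRAM in GB to load a model at a given quantization."""
    bytes_needed = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_needed * (1 + overhead) / 1e9

# Llama 3.3's 70B parameters at common quantization levels
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{model_memory_gb(70, bits):.0f} GB")
```

Even at 4-bit quantization (a typical default for local runners like Ollama), a 70B model wants on the order of 40+ GB, which explains why it crawls or fails on most consumer hardware.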
Messages (Author, Time)
How am I supposed to get any work done if ChatGPT is down (Buffalo, 4:52pm)
claude (Deleted, 4:53pm)
here is what ChatGPT recommends (reagan21, 4:54pm)
Grok is better anyway (HarlemCoug, 4:54pm)
how long will it be down for? I'm way too dependent now (Heath Squashwell, 5:01pm)
Run your own model locally. Ollama can run pretty much any open source model. (barnacle, 5:01pm)
I second this, I'm running ollama with the latest llama models (meta) (franklyvulgar, 5:11pm)
I've been using qwen2.5, not sure if my machine can handle llama3.3 70b params. (barnacle, 5:19pm)
i'm not using the latest, I didn't notice 3.3, I don't think my computer can (franklyvulgar, 5:25pm)
Lol, it's running but super slow. Like one word per second. (barnacle, 5:34pm)
yup, for what I do the 3.2 works fine. (franklyvulgar, 5:42pm)
You mean I have to manually write all my dictionaries and lists? (rtNelson, 5:10pm)
It'll be back (Zaphod, 5:12pm)