by wsxiaoys on 4/6/23, 4:40 PM with 126 comments
Tabby is in its early stages, and we are excited to receive feedback from the community.
Its GitHub repository is located here: https://github.com/TabbyML/tabby.
We have also deployed the latest docker image to Huggingface for a live demo: https://huggingface.co/spaces/TabbyML/tabby.
Tabby is built on top of the popular Hugging Face Transformers / Triton FasterTransformer backend and is designed to be self-hosted, giving you complete control over your data and privacy. In Tabby's next iteration, you will be able to fine-tune the model to meet your project's requirements.
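Since the server is self-hosted, you can talk to it over plain HTTP. Below is a minimal sketch of a client, assuming the `/v1/completions` route the Hugging Face demo exposes and the default port 5000 from the README's docker command; the host name and helper function names are my own, not part of Tabby's documented API:

```python
import json
from urllib import request

# Assumed endpoint: port 5000 comes from the README's docker command;
# adjust the host to point at your own Tabby deployment.
TABBY_URL = "http://localhost:5000/v1/completions"

def build_completion_request(language, prompt):
    """Build the JSON payload the completion endpoint expects."""
    return {"language": language, "prompt": prompt}

def complete(language, prompt, url=TABBY_URL):
    """POST a prompt and return the first suggested completion text."""
    data = json.dumps(build_completion_request(language, prompt)).encode()
    req = request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```

The response shape (an OpenAI-style `choices` list) matches what the live demo returns, so the same client should work against either the demo or a local container.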
by DanHulton on 4/6/23, 5:56 PM
The simpler the task I'm trying to do, the better chance it has of being correct, but that's also the part where I feel I get the most benefit from it, because I already thoroughly understand exactly what I'm writing, why I'm writing it, and what it needs to look like, and Copilot sometimes saves me the 5-30s it takes to write it. Over a day, that adds up and I can move marginally faster.
It's definitely not a 100x improvement (or even a 10x improvement), but I'm glad to have it.
If this works as well, locally, to escape the privacy issue, I'll be thrilled. Checking it out.
by the42thdoctor on 4/7/23, 12:20 AM
> https://tabbyml.notion.site/Compensation-Sheet-ad61218889ab4...
by vikp on 4/6/23, 5:59 PM
by uglycoyote on 4/6/23, 8:24 PM
But there's nothing in the introductory materials about how to train this thing.
by funnyfoobar on 4/7/23, 10:22 AM
Personally, I am working at a financial institution which is regulated, and the legal team + cyber are still evaluating if there would be any problems that could arise with CoPilot.
Given the way Europe is heading on AI regulation (Italy has already banned ChatGPT), it seems likely that Copilot will face similar threats in Europe as well.
So these kinds of solutions make a ton of sense for organisations to adopt.
However as a developer who has used copilot before, here are my 2 cents:
Copilot makes a lot of sense for autocompletion and code generation because it understands the current context of the code.
It would create friction if developers were expected to use the user interface this project ships with.
It would be great for this project to go in the direction of the following:
- Ability for the developers to train the model with our custom projects, so it can give suggestions wrt to our style of coding
- Extensions for popular Editors like VIM, VS Code, IntelliJ etc
Happy to share further info if needed :)
by mska on 4/6/23, 5:32 PM
Do they rely on legal contracts to prevent customers from using the software for free or modifying it for their own purposes?
by nmstoker on 4/6/23, 6:44 PM
I don't want to mark them down for poor language skills but the style of the comments on the TabbyML GitHub profile suggests a rather casual approach, and when combined with a lack of any serious documentation or even basic details beyond a sketched architecture diagram, I kind of wonder... Is there any particular context others can point to that I may be overlooking?
by Nic0 on 4/7/23, 6:49 AM
by nathancahill on 4/6/23, 5:03 PM
by simonw on 4/6/23, 6:41 PM
% docker run \
-it --rm \
-v ./data:/data \
-v ./data/hf_cache:/home/app/.cache/huggingface \
-p 5000:5000 \
-e MODEL_NAME=TabbyML/J-350M \
tabbyml/tabby
Unable to find image 'tabbyml/tabby:latest' locally
latest: Pulling from tabbyml/tabby
docker: no matching manifest for linux/arm64/v8 in the manifest list entries.
See 'docker run --help'.
I have an M2 Mac. I believe Docker is capable of running images compiled for different architectures using QEMU-style workarounds, but is that something I can do with a one-liner, or would I need to build a new image from scratch?
Previous experiments with Docker and QEMU: https://til.simonwillison.net/docker/emulate-s390x-with-qemu
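For what it's worth, Docker's built-in QEMU emulation can usually be enabled with a single flag rather than a rebuild. A sketch of the same command with `--platform linux/amd64` added (untested against this particular image, and emulated inference will be slow):

```shell
# Force the amd64 image under QEMU emulation on an arm64 (M1/M2) host.
docker run \
  -it --rm \
  --platform linux/amd64 \
  -v ./data:/data \
  -v ./data/hf_cache:/home/app/.cache/huggingface \
  -p 5000:5000 \
  -e MODEL_NAME=TabbyML/J-350M \
  tabbyml/tabby
```

This only works if Docker Desktop's Rosetta/QEMU emulation is enabled, and a GPU-dependent image may still fail at runtime even when the container starts.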
by shaunxcode on 4/7/23, 1:59 AM
by wongarsu on 4/6/23, 8:18 PM
Is this a limitation of the hosted demo or the chosen model, or do I simply have to wait a bit until my favorite niche language is supported?
by syntaxing on 4/6/23, 5:36 PM
by myin on 4/6/23, 5:06 PM
by akrymski on 4/6/23, 5:20 PM
by GartzenDeHaes on 4/6/23, 5:19 PM
by jslakro on 4/6/23, 7:07 PM
by boringuser2 on 4/7/23, 1:38 AM
GPT-4 can autogenerate most of the code a business needs; you just need a lone engineer to keep it in check.
by ROFISH on 4/6/23, 11:36 PM
I restrict all usage of AI tools trained on publicly-sourced data because of unknown copyright restrictions, general unease, and lawsuits; however, if this can be trained solely on my own codebases, which have clean provenance, I can be far better protected against potential lawsuits.
Copilot is a cool tool, but super scary from a legal perspective. And even more heavily regulated industries (that I'm not in) would absolutely need their own firewalled version.
by hddqsb on 4/7/23, 11:53 AM
I had a look at the demo (https://huggingface.co/spaces/TabbyML/tabby) and wasn't too impressed with the generated code for the default sample prompt (binary search) -- it recurses infinitely if the item is missing. It would be interesting to compare with Copilot's output. No idea how one would go about fixing this (other than manually adding a correct binary search implementation to the training data, which feels like cheating).
Request to https://tabbyml-tabby.hf.space/v1/completions:
    {
      "language": "python",
      "prompt": "def binarySearch(arr, left, right, x):\n    mid = (left +"
    }
Response:
    {
      "id": "cmpl-...",
      "created": 1680867355,
      "choices": [
        {
          "index": 0,
          "text": " right) >> 1\n    if x < arr[mid]:\n        return binarySearch(arr, left, mid - 1, x)\n    elif x > arr[mid]:\n        return binarySearch(arr, mid + 1, right, x)\n    else:\n        return mid"
        }
      ]
    }
Formatted code:
    def binarySearch(arr, left, right, x):
        mid = (left + right) >> 1
        if x < arr[mid]:
            return binarySearch(arr, left, mid - 1, x)
        elif x > arr[mid]:
            return binarySearch(arr, mid + 1, right, x)
        else:
            return mid
Manually written test cases:
    arr = [1, 3, 5, 7]
    print(binarySearch(arr, 0, len(arr), 5))  # 2 (correct)
    print(binarySearch(arr, 0, len(arr), 4))  # RecursionError
Runnable demo: https://tio.run/##lY/BCoJAEIbvPsUPXVw0yCyCKG@9QB3Fg@maC7bKuI...
by covi on 4/6/23, 9:29 PM
by unosama on 4/7/23, 2:51 AM
by moonchrome on 4/6/23, 8:29 PM
My main problem with Copilot is latency/speed - I would shell out for a 4090 if it meant I could use a local Copilot model that's super fast, low latency, and explores deep suggestions.
by arco1991 on 4/6/23, 9:25 PM
by xvilka on 4/7/23, 3:11 AM
by grudg3 on 4/6/23, 10:32 PM
by accelbred on 4/9/23, 5:29 PM
I'm interested in this stuff but I won't be buying Nvidia cards.
by zoba on 4/6/23, 9:59 PM
by lfkdev on 4/6/23, 6:01 PM