by stellaathena on 3/2/23, 7:12 PM with 105 comments
by dvt on 3/2/23, 8:10 PM
What we do need is more weights being released in the public domain (hard to find even on Huggingface), easier ways to train models locally, better pruned models for embedded device inference (e.g. running on a Jetson Nano), easier ways to fine-tune for specific contexts, and so on. My big gripe, and for obvious reasons, is that we need to step away from cloud-based inference, and it doesn't seem like anyone's working on that.
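For concreteness, fully local (non-cloud) inference with an open-weights checkpoint already looks roughly like the sketch below, using the Hugging Face transformers pipeline API; the model name and prompt are placeholder assumptions, not recommendations:

from transformers import pipeline

# Runs entirely on the local machine; no remote API calls once the weights are cached.
generator = pipeline("text-generation", model="gpt2")  # swap in any locally available checkpoint
print(generator("Running models locally means", max_new_tokens=30)[0]["generated_text"])

The gap the comment points at is everything around this: pruning/quantizing such models for devices like a Jetson Nano, and making fine-tuning for a specific context as easy as the inference call.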
by rahidz on 3/2/23, 9:13 PM
by jackblemming on 3/2/23, 7:29 PM
by supermatt on 3/3/23, 7:29 AM
Here's my addition! Or subtraction, I guess... (-4 chars by changing the call to AdaptiveAvgPool2d):
from torch.nn import*
def c(h,d,k,p,n):S,C,A=Sequential,Conv2d,lambda x:S(x,GELU(),BatchNorm2d(h));R=type('',(S,),{'forward':lambda s,x:s[0](x)+x});return S(A(C(3,h,p,p)),*[S(R(A(C(h,h,k,1,k//2,1,h))),A(C(h,h,1)))for _ in[0]*d],AdaptiveAvgPool2d(1),Flatten(),Linear(h,n))
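For anyone squinting at the golf, here is a rough ungolfed equivalent (my own expansion; I'm reading h/d/k/p/n as hidden width, depth, kernel size, patch size, and number of classes, and the positional Conv2d args above as stride, padding, dilation, groups):

import torch
from torch import nn

class Residual(nn.Module):
    # y = f(x) + x; same as the type('', (Sequential,), {'forward': ...}) trick above
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(h, d, k, p, n):
    act_bn = lambda conv: nn.Sequential(conv, nn.GELU(), nn.BatchNorm2d(h))
    return nn.Sequential(
        act_bn(nn.Conv2d(3, h, kernel_size=p, stride=p)),                     # patch embedding
        *[nn.Sequential(
            Residual(act_bn(nn.Conv2d(h, h, k, groups=h, padding=k // 2))),   # depthwise (spatial) mixing
            act_bn(nn.Conv2d(h, h, kernel_size=1)),                           # pointwise (channel) mixing
        ) for _ in range(d)],
        nn.AdaptiveAvgPool2d(1),  # the 4-char saving: 1 instead of (1, 1)
        nn.Flatten(),
        nn.Linear(h, n),
    )

# e.g. conv_mixer(256, 8, 9, 7, 1000)(torch.randn(1, 3, 224, 224)) -> shape (1, 1000)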
by bilsbie on 3/2/23, 10:26 PM
by victorbjorklund on 3/2/23, 7:54 PM
by victor9000 on 3/3/23, 4:34 AM
by eachro on 3/2/23, 7:26 PM
by valzam on 3/2/23, 9:59 PM
by hinkley on 3/2/23, 8:46 PM
by quartzbox on 3/2/23, 8:51 PM
by return_to_monke on 3/2/23, 7:55 PM
But didn't OpenAI start as a nonprofit, too?
by whitten on 3/2/23, 8:46 PM