we are the tiny corp

  • Do you find yourself millions of dollars in the hole building an AI accelerator chip?
  • Jumped the gun and built hardware before you had the software ready?
  • Have a SoC with a hard-to-use Neural Network SDK? (Qualcomm, Rockchip)
  • Just wish your accelerator was faster? (NVIDIA, Google)
  • The tiny corp can help. We can port our framework to any accelerator.


    We write and maintain tinygrad, the fastest-growing neural network framework (almost 9000 GitHub stars)

    It's extremely simple, and breaks down the most complex networks into 4 OpTypes:

  • UnaryOps operate on one tensor and run elementwise. RELU, LOG, RECIPROCAL, etc...
  • BinaryOps operate on two tensors and run elementwise to return one. ADD, MUL, etc...
  • ReduceOps operate on one tensor and return a smaller tensor. SUM, MAX
  • MovementOps operate on one tensor and move the data around, copy-free with ShapeTracker. RESHAPE, PERMUTE, EXPAND, etc...
  • But how...where are your CONVs and MATMULs? Read the code to solve this mystery.
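To make the four op classes concrete, here is a toy sketch in NumPy (illustrative only, not tinygrad's code) of how a matmul can be built from nothing but movement, binary, and reduce ops: reshape/broadcast the inputs, multiply elementwise, then sum over the shared dimension.

```python
import numpy as np

def matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Matmul from the four op classes, sketched in NumPy."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    a3 = a.reshape(n, k, 1)   # MovementOp: RESHAPE
    b3 = b.reshape(1, k, m)   # MovementOp: RESHAPE (broadcast acts like EXPAND)
    prod = a3 * b3            # BinaryOp: MUL, shape (n, k, m)
    return prod.sum(axis=1)   # ReduceOp: SUM over k -> (n, m)

a = np.random.randn(3, 4)
b = np.random.randn(4, 5)
assert np.allclose(matmul(a, b), a @ b)
```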


    FAQ:

    Is tinygrad used anywhere?

    tinygrad is used in openpilot to run the driving model on the Snapdragon 845 GPU. It replaces SNPE, is faster, supports loading ONNX files, supports training, and allows for attention (SNPE only allows fixed weights).


    Is tinygrad inference only?

    No! It supports full forward and backward passes with autodiff. This is implemented at a level of abstraction higher than the accelerator specific code, so a tinygrad port gets you this for free.
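To see what reverse-mode autodiff looks like at that higher level of abstraction, here is a toy scalar sketch (illustrative only, not tinygrad's implementation): each value records how it was produced, and backward() walks the graph in reverse applying the chain rule.

```python
class Value:
    """Toy reverse-mode autodiff on scalars (not tinygrad's code)."""
    def __init__(self, data, parents=()):
        self.data, self.grad, self.parents = data, 0.0, parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological sort, then chain rule in reverse
        topo, seen = [], set()
        def build(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v.parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x, y = Value(3.0), Value(4.0)
z = x * y + x          # z = x*y + x
z.backward()
assert x.grad == 5.0   # dz/dx = y + 1
assert y.grad == 3.0   # dz/dy = x
```

Because the gradient machinery only talks to ops at this level, a new accelerator backend never has to implement backward passes itself.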


    How can I use tinygrad for my next ML project?

    Follow the installation instructions on the tinygrad repo. It has a similar API to PyTorch, yet simpler and more refined. Be warned that it is less stable while tinygrad is in alpha, though it has been fairly stable for a while.


    When will tinygrad leave alpha?

    When we can reproduce a common set of papers on 1 NVIDIA GPU 2x faster than PyTorch. We also want the speed to be good on the M1. ETA, Q2 next year.


    How is tinygrad faster than PyTorch?

    For most use cases it isn't yet, but it will be. It has three advantages:
  • It compiles a custom kernel for every operation, allowing extreme shape specialization.
  • All tensors are lazy, so it can aggressively fuse operations.
  • The backend is 10x+ simpler, meaning optimizing one kernel makes everything fast.
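Laziness is what makes the fusion possible: an op returns a graph node instead of a buffer, so a whole chain of elementwise ops can be compiled into one kernel. A toy sketch of the idea in plain Python (illustrative only, not tinygrad's code):

```python
class LazyTensor:
    """Toy lazy tensor: elementwise ops record a graph, realize() fuses it."""
    def __init__(self, data=None, src=None, op=None):
        self.data, self.src, self.op = data, src, op  # op is a 1-arg function

    def elementwise(self, fn):
        return LazyTensor(src=self, op=fn)  # record the op, compute nothing

    def relu(self):
        return self.elementwise(lambda x: max(x, 0.0))

    def double(self):
        return self.elementwise(lambda x: 2.0 * x)

    def realize(self):
        # Walk back to the real data, collecting the pending op chain.
        ops, node = [], self
        while node.data is None:
            ops.append(node.op)
            node = node.src
        ops.reverse()
        # "Fused kernel": one pass over the data, no intermediate buffers.
        def fused(x):
            for f in ops:
                x = f(x)
            return x
        return [fused(x) for x in node.data]

t = LazyTensor([-1.0, 2.0, -3.0])
out = t.relu().double()   # nothing computed yet
out.realize()             # [0.0, 4.0, 0.0] in a single pass
```

In a real backend the fused chain becomes one generated GPU kernel, so each op chain costs one launch and one read/write of memory instead of one per op.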


    How can I work with the tiny corporation?

    Email me, geohot@gmail.com. We are looking for contracts and sponsorships to improve various aspects of tinygrad. I would also consider an internship where I work on tinygrad in the context of a company.


    How can I work for the tiny corporation?

    Contribute to tinygrad on GitHub.


    Can I invest in the tiny corporation?

    It's too tiny to have any equity to sell. We are looking for contracts and sponsorships.


    When are you launching your governance token? Will there be an airdrop?

    Bruh.