I do a one-line version of that, but the MPS implementation was disappointing the only time I benchmarked it. Been hoping to try CUDA vs MPS vs the native Mac NN API over Christmas on the new M4. Happiest if PyTorch MPS took a big step forward.
Not that I know of, but as wasteful as it is, I have kept a dedicated Windows machine in my office for local work because it was just too slow on my M1 Mac Mini. I'm going to try again with my new M4 Pro, but I also want to see Mac native (which I'd guess is faster than PyTorch MPS, but by how much?).
Comments
raise Exception("Woah there partner, go get better hardware")
`DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")`
I use this as a global across all my projects; very handy.
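For the Mac users in this thread, that one-liner can be extended to fall back to MPS before CPU. A minimal sketch, assuming PyTorch 1.12+ (when `torch.backends.mps` was added); the `pick_device` helper name is mine:

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA, then Apple's MPS backend, then plain CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

DEVICE = pick_device()
print(DEVICE)
```

Same idea as the global above: define `DEVICE` once and pass it to `.to(DEVICE)` everywhere.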
https://pytorch.org/docs/stable/xpu.html