Commit Graph

55 Commits

SHA1 Message Date
9fb9d7e8e1 Implement getting layer, weights and biases 2024-04-16 19:09:41 +02:00
f4ae45f867 Start implementing weights import 2024-04-15 22:17:48 +02:00
18522c2dea Cleanup and refactor 2024-04-11 22:52:41 +02:00
4b9d123e94 Implement device vector utils 2024-04-11 22:22:33 +02:00
710a33bdde Move softmax partial kernels to matmul 2024-04-11 22:01:47 +02:00
e86e04f6d6 Add clearing kernel 2024-04-11 19:49:09 +02:00
b49dddf34a Improve softmax numerical stability 2024-04-08 23:25:46 +02:00
e419a93408 Fix softmax sum kernel 2024-04-08 22:09:18 +02:00
9482d7bc43 Add model predict test 2024-03-22 22:31:32 +01:00
87db47089e Add output layer to model predict 2024-03-21 23:22:12 +01:00
90fb104dae Implement output layer 2024-03-21 23:07:46 +01:00
a9d0a0832a Change model input layer creation 2024-03-21 00:24:49 +01:00
af6838e8ae Initial model implementation 2024-03-20 22:31:39 +01:00
6f4cdf3792 Implement avg pool test 2024-03-20 21:57:22 +01:00
dfff0360d9 Implement max pooling test 2024-03-20 21:44:04 +01:00
ef63cbd9f1 Implement avg pooling 2024-03-19 22:33:43 +01:00
a0fc1b00ae Implement max pooling layer 2024-03-19 22:04:58 +01:00
b6c4b7d2ae Refactor layers 2024-03-19 21:35:05 +01:00
8d14b74f66 Implement Add layer 2024-03-18 20:37:13 +01:00
d9c6c663c8 Rename ILayer to WeightedLayer 2024-03-18 20:36:52 +01:00
6cf604423a Combine padding and conv kernel 2024-03-18 19:53:40 +01:00
e6d3757312 Change unsigned int to int 2024-03-18 19:40:00 +01:00
aac0c3a826 Implement concat layer 2024-03-17 21:38:29 +01:00
42d646750b Abstract activation and implement softmax 2024-03-17 18:37:15 +01:00
0c22fac64e Add toplevel CUDANet namespace 2024-03-17 16:08:53 +01:00
dc86cddeb7 Use tiling shmem for mat vec mul kernel 2024-03-15 23:33:09 +01:00
88f7fff217 Add prefix to guards 2024-03-13 22:23:23 +01:00
7157a27e56 Add documentation comments 2024-03-12 21:50:06 +01:00
708164e4d0 Implement simple input layer 2024-03-12 21:16:46 +01:00
9d91896f13 Change forward function to return output pointer 2024-03-12 20:50:49 +01:00
d2ab78fbc7 Add Kernels namespace 2024-03-11 21:04:23 +01:00
e0178e2d5c Cleanup and refactor 2024-03-11 20:39:44 +01:00
f3112311da Make conv2d work again 2024-03-10 19:13:22 +01:00
d177a67cd6 Add bias to conv2d 2024-03-09 23:03:23 +01:00
e51aabc2f2 Initial cuda conv kernel implementation 2024-03-08 23:35:54 +01:00
4b6fcbc191 Implement simple test for host conv2d 2024-03-08 23:12:04 +01:00
69ccba2dad Start conv test implementation 2024-03-07 22:03:05 +01:00
fc2c1616b4 Initial cpu conv implementation 2024-03-07 21:24:59 +01:00
f4257afd5a Remove cublas dependency 2024-03-05 18:41:35 +01:00
98ad84c659 Add matrix math kernels 2024-03-05 17:38:46 +01:00
cfc5c46d5e Initialize conv2d layer 2024-03-04 22:16:03 +01:00
f37320594a Add activations enum 2024-03-03 15:24:54 +01:00
019ccc33d9 Start implementing padding kernel 2024-02-29 22:21:48 +01:00
045359cca2 Remove unneeded code 2024-02-29 22:21:32 +01:00
b1eb8b5806 Add activations test 2024-02-27 20:19:17 +01:00
48ba09b28d Format source code using clang-format 2024-02-27 18:52:12 +01:00
5e1e0ed1d1 Initial activations implementation 2024-02-27 00:24:57 +01:00
6e99525ad0 Rename header files to .cuh 2024-02-26 19:53:46 +01:00
035f3b053b Rename files to .cu and fix IDX2C usage 2024-02-21 20:03:04 +01:00
02fc9e4e8b Use IDX2C macro properly 2024-02-19 22:26:54 +01:00