{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":463628495,"defaultBranch":"main","name":"executorch","ownerLogin":"pytorch","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2022-02-25T17:58:31.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/21003710?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717733543.0","currentOid":""},"activityList":{"items":[{"before":"e567cfdc9185285aa7efd04c743faed80233e9ff","after":"a82430363ef5d25ff51be1adeb3f1edbf435b39e","ref":"refs/heads/nightly","pushedAt":"2024-06-07T11:34:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"pytorchbot","name":null,"path":"/pytorchbot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/21957446?s=80&v=4"},"commit":{"message":"2024-06-07 nightly release (ff4e9edc55c86953ae33dbbaaacab8c9949dcb74)","shortMessageHtmlLink":"2024-06-07 nightly release (ff4e9ed)"}},{"before":null,"after":"137ac2d355c74df8fc1f07ea0cb4bf5a81cf9a73","ref":"refs/heads/gh/jorgep31415/72/orig","pushedAt":"2024-06-07T04:12:42.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"[ET-VK][EZ] Inline test custom_pass\n\n`MeanToSumDiv()` and the upcoming `I64toI32()` should be compatible with all ET-VK models. 
Hence, we apply them to all Python tests.\n\nDifferential Revision: [D58272547](https://our.internmc.facebook.com/intern/diff/D58272547/)\n\nghstack-source-id: 229305638\nPull Request resolved: https://github.com/pytorch/executorch/pull/3895","shortMessageHtmlLink":"[ET-VK][EZ] Inline test custom_pass"}},{"before":"94d0c8be250482e94619ce2c05bdbaa6792cb03e","after":"4e0d09cf29552700c637d42b2998c82bc2f81ddc","ref":"refs/heads/gh/jorgep31415/61/orig","pushedAt":"2024-06-07T04:12:42.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"[ET-VK] Test int64 dtype\n\nPull Request resolved: https://github.com/pytorch/executorch/pull/3728\n\nUsing the `I64toI32()` export pass, we now support i64 input and constants (tensors defined in the `nn.Module`) and we test the following cases:\n- `torch.randint()` i64 input\n- `torch.tensor()` i64 input / constants\n- `torch.arange()` i64 input / constants\n\n@bypass-github-export-checks\n@bypass-github-pytorch-ci-checks\n@bypass-github-executorch-ci-checks\n\nDifferential Revision: [D57649649](https://our.internmc.facebook.com/intern/diff/D57649649/)\nghstack-source-id: 229306240","shortMessageHtmlLink":"[ET-VK] Test int64 dtype"}},{"before":"334c727cf70e259dc1b212251c553fba46980f1f","after":"aa75559f04ddd1b45be670de8abf05698f6dc8fc","ref":"refs/heads/gh/jorgep31415/60/orig","pushedAt":"2024-06-07T04:12:42.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"[ET-EXIR] Introduce I64toI32 export pass\n\nPull Request resolved: https://github.com/pytorch/executorch/pull/3727\n\n## Context\n\nA number of `nn.Module`s targeting the Vulkan delegate use `i64` dtype for operators, inputs, and outputs. 
This is because `i64` is the default for many `torch` functions. Since [`i64` dtype is a Vulkan extension](https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_KHR_shader_atomic_int64.html), it is not supported on all Vulkan devices. Hence, we introduce an export pass that converts the majority of the graph to `i32` dtype:\n1. For each `i64` dtype input, append a node to convert the output tensor to `i32` dtype.\n2. For each operator yielding `i64` dtype, replace it with `i32` dtype.\n3. For each `i64` dtype output, prepend a node to convert the input tensor to `i64` dtype.\n\n## Example\nTake this simple model compiled to one Clamp operation with input `x = (torch.randint(low=-100, high=100, size=(5, 5)),)`. By default, `torch.randint` uses `i64` dtype.\n```\nclass ClampModule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n\n def forward(self, x):\n x = torch.clamp(x, min=-3)\n return x\n```\n\nIf it's compiled with such `i64` input, the export pass rewrites the graph to be the equivalent of the following.\n```\nclass GoalModule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n\n def forward(self, x):\n x = x.to(torch.int32)\n x = torch.clamp(x, min=-3)\n x = x.to(torch.int64)\n return x\n```\n\nghstack-source-id: 229306241\n@exported-using-ghexport\n\nDifferential Revision: [D57649650](https://our.internmc.facebook.com/intern/diff/D57649650/)","shortMessageHtmlLink":"[ET-EXIR] Introduce I64toI32 export pass"}},{"before":"b6cf7739d077daa499756a46acd9d531b0a69e96","after":"84ab7b92824f0cc488c168a0347c8b85b6e7c9b2","ref":"refs/heads/gh/jorgep31415/61/head","pushedAt":"2024-06-07T04:12:41.000Z","pushType":"push","commitsCount":106,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"Update on \"[ET-VK] Test clamp with int64 dtype\"\n\n\nTesting method requires\n1. 
executing the eager-mode model with int64 dtype,\n2. executing the ET-VK model with int32 dtype, and\n3. narrowing eager-mode result to int32 dtype before comparing.\n\nDifferential Revision: [D57649649](https://our.internmc.facebook.com/intern/diff/D57649649/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update on \"[ET-VK] Test clamp with int64 dtype\""}},{"before":"6137bbdb957df8f7ed1ee9e3ab3dd454bf7651df","after":"97b516d2d04d4fc2bf760328a4970682462aa5c9","ref":"refs/heads/gh/jorgep31415/60/head","pushedAt":"2024-06-07T04:12:41.000Z","pushType":"push","commitsCount":106,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"Update on \"[ET-EXIR] Introduce export I64ToI32DtypePass\"\n\n\n## Context\n\nA number of `nn.Module`s targeting the Vulkan delegate use `int64` dtype for operators, inputs, and outputs. This is because `int64` is the default for many `torch` functions. 
Since `int64` is not yet supported in the ET-VK delegate, and since we don't actually need the full `int64` range we can convert all instances of `int64` dtypes to `int32` dtypes.\n\nWe are placing it among the ET passes as opposed to [the backend passes](https://github.com/pytorch/executorch/tree/04b99b7cd785895846953691ce124c6414a1e839/backends/transforms) since this should apply to the whole graph, not just the VK subgraph, to take care of `nn.Module`s with `int64` dtype inputs/outputs.\n\n## Implementation\nUses what little I know about identifying operators, inputs, outputs in a `GraphModule` in a similar style to [`vulkan_graph_builder.py`](https://github.com/pytorch/executorch/blob/04b99b7cd785895846953691ce124c6414a1e839/backends/vulkan/serialization/vulkan_graph_builder.py#L316-L329).\n\nDifferential Revision: [D57649650](https://our.internmc.facebook.com/intern/diff/D57649650/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update on \"[ET-EXIR] Introduce export I64ToI32DtypePass\""}},{"before":"6137bbdb957df8f7ed1ee9e3ab3dd454bf7651df","after":"c076a9aea962653bbcf44c5c19ffc324394fb8d4","ref":"refs/heads/gh/jorgep31415/61/base","pushedAt":"2024-06-07T04:12:40.000Z","pushType":"push","commitsCount":105,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"Update base for Update on \"[ET-VK] Test clamp with int64 dtype\"\n\n\nTesting method requires\n1. executing the eager-mode model with int64 dtype,\n2. executing the ET-VK model with int32 dtype, and\n3. 
narrowing eager-mode result to int32 dtype before comparing.\n\nDifferential Revision: [D57649649](https://our.internmc.facebook.com/intern/diff/D57649649/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update base for Update on \"[ET-VK] Test clamp with int64 dtype\""}},{"before":"3c43fd6946d515722ec44dd1c2c8222915bb6b9b","after":"cddc80fc3fbc721b6e2543d555d6936cc9569aab","ref":"refs/heads/gh/jorgep31415/60/base","pushedAt":"2024-06-07T04:12:40.000Z","pushType":"push","commitsCount":105,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"Update base for Update on \"[ET-EXIR] Introduce export I64ToI32DtypePass\"\n\n\n## Context\n\nA number of `nn.Module`s targeting the Vulkan delegate use `int64` dtype for operators, inputs, and outputs. This is because `int64` is the default for many `torch` functions. Since `int64` is not yet supported in the ET-VK delegate, and since we don't actually need the full `int64` range we can convert all instances of `int64` dtypes to `int32` dtypes.\n\nWe are placing it among the ET passes as opposed to [the backend passes](https://github.com/pytorch/executorch/tree/04b99b7cd785895846953691ce124c6414a1e839/backends/transforms) since this should apply to the whole graph, not just the VK subgraph, to take care of `nn.Module`s with `int64` dtype inputs/outputs.\n\n## Implementation\nUses what little I know about identifying operators, inputs, outputs in a `GraphModule` in a similar style to [`vulkan_graph_builder.py`](https://github.com/pytorch/executorch/blob/04b99b7cd785895846953691ce124c6414a1e839/backends/vulkan/serialization/vulkan_graph_builder.py#L316-L329).\n\nDifferential Revision: [D57649650](https://our.internmc.facebook.com/intern/diff/D57649650/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update base for Update on \"[ET-EXIR] Introduce export 
I64ToI32DtypePass\""}},{"before":null,"after":"04c260885f0d0e4c5fdb896b9ecc46366ac4079e","ref":"refs/heads/gh/jorgep31415/72/head","pushedAt":"2024-06-07T04:12:23.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"[ET-VK][EZ] Inline test custom_pass\n\n`MeanToSumDiv()` and the upcoming `I64toI32()` should be compatible with all ET-VK models. Hence, we apply them to all Python tests.\n\nDifferential Revision: [D58272547](https://our.internmc.facebook.com/intern/diff/D58272547/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"[ET-VK][EZ] Inline test custom_pass"}},{"before":null,"after":"6554fa544b7d50db7a89dce8fdcaff667ec4a9d7","ref":"refs/heads/gh/jorgep31415/72/base","pushedAt":"2024-06-07T04:12:23.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jorgep31415","name":"Jorge Pineda","path":"/jorgep31415","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/32918197?s=80&v=4"},"commit":{"message":"Add colab/jupyter notebook in getting started page (#3885)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/executorch/pull/3885\n\nbypass-github-export-checks\nbypass-github-pytorch-ci-checks\nbypass-github-executorch-ci-checks\n\nbuild-break\noverriding_review_checks_triggers_an_audit_and_retroactive_review\n\nOncall Short Name: executorch\n\nReviewed By: mcr229, cccclai\n\nDifferential Revision: D58262970\n\nfbshipit-source-id: 0777670706e4a949ffd2bf9e82b77d968f39ee1a","shortMessageHtmlLink":"Add colab/jupyter notebook in getting started page (#3885)"}},{"before":null,"after":"d52c39ffb86c1035884b7ad3108f0c492837816a","ref":"refs/heads/export-D58207691","pushedAt":"2024-06-07T03:04:48.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"facebook-github-bot","name":"Facebook Community 
Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Change the elementwise broadcasting contract from graph to kernel\n\nSummary:\nCurrently, there is a graph level pass to handle limited broadcasting of elementwise ops if the input tensors are not of the same size.\n\nWe move this responsibility down to the kernels with this diff, which is how ET and the portable ops do it. Ops of this kind are only `add`, `sub`, `mul` and `div` for now, but there will be more.\n\nWe retain the implementations for the reference kernels, because we want to avoid linking the portable ops directly, which takes forever at compile time. We can also use a much smaller set of types (basically only `float`).\n\nWe can remove a hack in the RNNT Joiner with this change, and run it natively. It takes a huge hit in performance, which will be fixed by getting broadcast-friendly kernels from Cadence.\n\nWe finally remove the binop tests in `test_aten_ops.py`, which were also using strange types and had been on the chopping block for a while.\n\nDifferential Revision: D58207691","shortMessageHtmlLink":"Change the elementwise broadcasting contract from graph to kernel"}},{"before":"9841ee13f7dccba334f24962a45b061a20d3bc5b","after":"d1b4c61c93ba09ddfb2ad8b931ceaf00af07ff45","ref":"refs/heads/update-pytorch-commit-hash/8677863676-101-1","pushedAt":"2024-06-07T02:21:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"pytorchupdatebot","name":"PyTorch UpdateBot","path":"/pytorchupdatebot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/133916390?s=80&v=4"},"commit":{"message":"update pytorch commit hash","shortMessageHtmlLink":"update pytorch commit 
hash"}},{"before":"b3d19bcb816c555dfa7135bb6d968c5b986b2f30","after":"131f275eb5d14ebaa1130c5fb3947a23cc0dbef1","ref":"refs/heads/gh/SS-JIA/50/orig","pushedAt":"2024-06-07T02:05:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Add support for int8 textures and buffers\n\nPull Request resolved: https://github.com/pytorch/executorch/pull/3892\n\n## Context\n\nAs title. This changeset adds support for Tensors that have the dtype `api::kChar` or `api::kQInt8` for both buffer and texture storage.\nghstack-source-id: 229299804\n@exported-using-ghexport\n\nDifferential Revision: [D58263388](https://our.internmc.facebook.com/intern/diff/D58263388/)","shortMessageHtmlLink":"[ET-VK] Add support for int8 textures and buffers"}},{"before":"f1989a832d860b212188ff4e22193fbef75eb5a7","after":"7268ab997a0b269223de3a3ac8a531c742a8dae5","ref":"refs/heads/gh/SS-JIA/51/orig","pushedAt":"2024-06-07T02:05:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Implement weight_int8packed_mm\n\nPull Request resolved: https://github.com/pytorch/executorch/pull/3893\n\n## Context\n\nAs title, this changesets implement the `aten._weight_int8packed_mm` operator. 
The operator implements a linear layer where the weight is quantized symmetrically to 8 bits for each \"group\".\n\nDifferential Revision: [D58263387](https://our.internmc.facebook.com/intern/diff/D58263387/)\nghstack-source-id: 229299805","shortMessageHtmlLink":"[ET-VK] Implement weight_int8packed_mm"}},{"before":"a09690fdea1d086a9e678410c013e79114d6350f","after":"5b30ede591642d6a18d6eab813892ca8fd28d73e","ref":"refs/heads/gh/SS-JIA/51/head","pushedAt":"2024-06-07T02:05:14.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Update on \"[ET-VK] Implement weight_int8packed_mm\"\n\n\n## Context\n\nAs title, this changesets implement the `aten._weight_int8packed_mm` operator. The operator implements a linear layer where the weight is quantized symmetrically to 8 bits for each \"group\".\n\nDifferential Revision: [D58263387](https://our.internmc.facebook.com/intern/diff/D58263387/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update on \"[ET-VK] Implement weight_int8packed_mm\""}},{"before":"43f9fe21107162dfc2b4276226bd9d0d8f348ab2","after":"d59e21c456a03a6d336d1576f1fc08d8e248fb75","ref":"refs/heads/gh/SS-JIA/50/head","pushedAt":"2024-06-07T02:05:14.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Update on \"[ET-VK] Add support for int8 textures and buffers\"\n\n\n## Context\n\nAs title. 
This changeset adds support for Tensors that have the dtype `api::kChar` or `api::kQInt8` for both buffer and texture storage.\n\nDifferential Revision: [D58263388](https://our.internmc.facebook.com/intern/diff/D58263388/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update on \"[ET-VK] Add support for int8 textures and buffers\""}},{"before":"43f9fe21107162dfc2b4276226bd9d0d8f348ab2","after":"3ce9f2a99925179ac9f6517d2f85e9c100094528","ref":"refs/heads/gh/SS-JIA/51/base","pushedAt":"2024-06-07T02:05:13.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Update base for Update on \"[ET-VK] Implement weight_int8packed_mm\"\n\n\n## Context\n\nAs title, this changesets implement the `aten._weight_int8packed_mm` operator. The operator implements a linear layer where the weight is quantized symmetrically to 8 bits for each \"group\".\n\nDifferential Revision: [D58263387](https://our.internmc.facebook.com/intern/diff/D58263387/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update base for Update on \"[ET-VK] Implement weight_int8packed_mm\""}},{"before":null,"after":"b3d19bcb816c555dfa7135bb6d968c5b986b2f30","ref":"refs/heads/gh/SS-JIA/50/orig","pushedAt":"2024-06-07T01:59:17.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Add support for int8 textures and buffers\n\n## Context\n\nAs title. 
This changeset adds support for Tensors that have the dtype `api::kChar` or `api::kQInt8` for both buffer and texture storage.\n\nDifferential Revision: [D58263388](https://our.internmc.facebook.com/intern/diff/D58263388/)\n\nghstack-source-id: 229297476\nPull Request resolved: https://github.com/pytorch/executorch/pull/3892","shortMessageHtmlLink":"[ET-VK] Add support for int8 textures and buffers"}},{"before":null,"after":"f1989a832d860b212188ff4e22193fbef75eb5a7","ref":"refs/heads/gh/SS-JIA/51/orig","pushedAt":"2024-06-07T01:59:17.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Implement weight_int8packed_mm\n\n## Context\n\nAs title, this changesets implement the `aten._weight_int8packed_mm` operator. The operator implements a linear layer where the weight is quantized symmetrically to 8 bits for each \"group\".\n\nDifferential Revision: [D58263387](https://our.internmc.facebook.com/intern/diff/D58263387/)\n\nghstack-source-id: 229299357\nPull Request resolved: https://github.com/pytorch/executorch/pull/3893","shortMessageHtmlLink":"[ET-VK] Implement weight_int8packed_mm"}},{"before":null,"after":"43f9fe21107162dfc2b4276226bd9d0d8f348ab2","ref":"refs/heads/gh/SS-JIA/51/base","pushedAt":"2024-06-07T01:59:00.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Add support for int8 textures and buffers\n\n## Context\n\nAs title. 
This changeset adds support for Tensors that have the dtype `api::kChar` or `api::kQInt8` for both buffer and texture storage.\n\nDifferential Revision: [D58263388](https://our.internmc.facebook.com/intern/diff/D58263388/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"[ET-VK] Add support for int8 textures and buffers"}},{"before":null,"after":"a09690fdea1d086a9e678410c013e79114d6350f","ref":"refs/heads/gh/SS-JIA/51/head","pushedAt":"2024-06-07T01:59:00.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Implement weight_int8packed_mm\n\n## Context\n\nAs title, this changesets implement the `aten._weight_int8packed_mm` operator. The operator implements a linear layer where the weight is quantized symmetrically to 8 bits for each \"group\".\n\nDifferential Revision: [D58263387](https://our.internmc.facebook.com/intern/diff/D58263387/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"[ET-VK] Implement weight_int8packed_mm"}},{"before":null,"after":"04674a0ab28b730c68a9b729a7d074563284b87c","ref":"refs/heads/gh/SS-JIA/50/base","pushedAt":"2024-06-07T01:58:58.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"Fix example code in `Running an ExecuTorch Model in C++ Tutorial` (#3868)\n\nSummary:\nWe need to pass a pointer to the DataLoader.\nhttps://github.com/pytorch/executorch/blob/066b50ba270240c003d18c7af273e031a28a79d4/runtime/executor/program.h#L76-L78\n\nPull Request resolved: https://github.com/pytorch/executorch/pull/3868\n\nReviewed By: mergennachin\n\nDifferential Revision: D58242442\n\nPulled By: JacobSzwejbka\n\nfbshipit-source-id: c6cd070585b9e2ee4bdd74e08799ff35d3a44842","shortMessageHtmlLink":"Fix example 
code in Running an ExecuTorch Model in C++ Tutorial (#3868"}},{"before":null,"after":"43f9fe21107162dfc2b4276226bd9d0d8f348ab2","ref":"refs/heads/gh/SS-JIA/50/head","pushedAt":"2024-06-07T01:58:58.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"SS-JIA","name":"Sicheng Stephen Jia","path":"/SS-JIA","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7695547?s=80&v=4"},"commit":{"message":"[ET-VK] Add support for int8 textures and buffers\n\n## Context\n\nAs title. This changeset adds support for Tensors that have the dtype `api::kChar` or `api::kQInt8` for both buffer and texture storage.\n\nDifferential Revision: [D58263388](https://our.internmc.facebook.com/intern/diff/D58263388/)\n\n[ghstack-poisoned]","shortMessageHtmlLink":"[ET-VK] Add support for int8 textures and buffers"}},{"before":"f13d22d4bd90341dca8f622753faccf5b1891b95","after":null,"ref":"refs/tags/ciflow/periodic/3887","pushedAt":"2024-06-07T01:02:26.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pytorch-bot[bot]","name":null,"path":"/apps/pytorch-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/40112?s=80&v=4"}},{"before":"1490145b2831e5fb7cd533d5158022b2ccf155cd","after":"66cc13b793d719755281baeb89960b2bb004e497","ref":"refs/heads/gh-pages","pushedAt":"2024-06-07T00:39:10.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"Auto-generating sphinx docs","shortMessageHtmlLink":"Auto-generating sphinx 
docs"}},{"before":"3a9f1a4b5b6cade079573d7e43a363b7ab6d5f54","after":"1490145b2831e5fb7cd533d5158022b2ccf155cd","ref":"refs/heads/gh-pages","pushedAt":"2024-06-07T00:01:43.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"Auto-generating sphinx docs","shortMessageHtmlLink":"Auto-generating sphinx docs"}},{"before":"d2b840906134e1cfb9986dff76cef53b8274b41f","after":"3a9f1a4b5b6cade079573d7e43a363b7ab6d5f54","ref":"refs/heads/gh-pages","pushedAt":"2024-06-06T23:51:24.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"Auto-generating sphinx docs","shortMessageHtmlLink":"Auto-generating sphinx docs"}},{"before":"d08dd2ff52e4c75fd5c8d5d6db0c35e91456bbc5","after":"a21e30f55e95cedc335fc19b636a75f4dc141faf","ref":"refs/heads/release/0.2","pushedAt":"2024-06-06T23:50:51.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"mcr229","name":"Max Ren","path":"/mcr229","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/40742183?s=80&v=4"},"commit":{"message":"Add colab/jupyter notebook in getting started page (#3885) (#3889)\n\nSummary:\r\nPull Request resolved: https://github.com/pytorch/executorch/pull/3885\r\n\r\nbypass-github-export-checks\r\nbypass-github-pytorch-ci-checks\r\nbypass-github-executorch-ci-checks\r\n\r\nbuild-break\r\noverriding_review_checks_triggers_an_audit_and_retroactive_review\r\n\r\nOncall Short Name: executorch\r\n\r\nReviewed By: mcr229, cccclai\r\n\r\nDifferential Revision: D58262970\r\n\r\nfbshipit-source-id: 0777670706e4a949ffd2bf9e82b77d968f39ee1a\r\n(cherry picked from commit 6554fa544b7d50db7a89dce8fdcaff667ec4a9d7)\r\n\r\nCo-authored-by: Mergen Nachin ","shortMessageHtmlLink":"Add 
colab/jupyter notebook in getting started page (#3885) (#3889)"}},{"before":null,"after":"ab08d878d785dd4865637750c0a58a59c5c0b375","ref":"refs/heads/cherry-pick-3885-by-pytorch_bot_bot_","pushedAt":"2024-06-06T23:49:28.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"pytorchbot","name":null,"path":"/pytorchbot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/21957446?s=80&v=4"},"commit":{"message":"Add colab/jupyter notebook in getting started page (#3885)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/executorch/pull/3885\n\nbypass-github-export-checks\nbypass-github-pytorch-ci-checks\nbypass-github-executorch-ci-checks\n\nbuild-break\noverriding_review_checks_triggers_an_audit_and_retroactive_review\n\nOncall Short Name: executorch\n\nReviewed By: mcr229, cccclai\n\nDifferential Revision: D58262970\n\nfbshipit-source-id: 0777670706e4a949ffd2bf9e82b77d968f39ee1a\n(cherry picked from commit 6554fa544b7d50db7a89dce8fdcaff667ec4a9d7)","shortMessageHtmlLink":"Add colab/jupyter notebook in getting started page (#3885)"}},{"before":"31b766b868b0fc93cf192e0d2944ff7866bbea68","after":"6554fa544b7d50db7a89dce8fdcaff667ec4a9d7","ref":"refs/heads/main","pushedAt":"2024-06-06T23:44:12.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add colab/jupyter notebook in getting started page (#3885)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/executorch/pull/3885\n\nbypass-github-export-checks\nbypass-github-pytorch-ci-checks\nbypass-github-executorch-ci-checks\n\nbuild-break\noverriding_review_checks_triggers_an_audit_and_retroactive_review\n\nOncall Short Name: executorch\n\nReviewed By: mcr229, cccclai\n\nDifferential Revision: D58262970\n\nfbshipit-source-id: 
0777670706e4a949ffd2bf9e82b77d968f39ee1a","shortMessageHtmlLink":"Add colab/jupyter notebook in getting started page (#3885)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEXydfDQA","startCursor":null,"endCursor":null}},"title":"Activity · pytorch/executorch"}