Fix torch_logs tutorial to support CPU fallback and CUDA capability check #3848
Sip4818 wants to merge 2 commits into pytorch:main from
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3848
Note: Links to docs will display an error until the docs builds have been completed.
❗ There is 1 currently active SEV. If your PR is affected, please view it below.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @Sip4818! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and afterwards the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
CLA signed. Happy to make any changes if needed.
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Claude finished @sekyondaMeta's task in 1m 37s. View job

Review of PR #3848

Summary

The PR removes the early-exit pattern that skipped the entire tutorial on unsupported devices and instead introduces device-selection logic so the tutorial always produces output. The overall intent is good: making the tutorial work in more environments is a clear improvement.

Issues

1. Trailing whitespace on line 45

```
else:
    device = "cuda"
                      ← trailing whitespace here
else:
```

2. PEP 8: spaces around `=` in keyword arguments

```python
inputs = (torch.ones(2, 2, device = device), torch.zeros(2, 2, device = device))
```

Per PEP 8, keyword arguments should not have spaces around `=`:

```python
inputs = (torch.ones(2, 2, device=device), torch.zeros(2, 2, device=device))
```

3. Verbose comments (lines 35-37)

The CLAUDE.md for this repo says to minimize comments and let code be self-explanatory. The three-line comment block:

```python
# Select device:
# - Use CUDA if available and supported
# - Otherwise fall back to CPU
```

could be shortened to a single line or removed entirely.

4. Consider whether "Fusion Decisions" output is meaningful on CPU

On CPU, the inductor backend makes different fusion choices than on CUDA, so the "Fusion Decisions" section of the log output may not match what the tutorial describes.

Overall

The approach is sound. The two mechanical issues (trailing whitespace and keyword-argument spacing) should be fixed before merge. The comment verbosity is a style nit.
Fixed lint issues (whitespace and formatting). Ready for re-review. |
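For illustration, here is a minimal sketch of the snippet with the review's lint fixes applied. The names (`device`, `inputs`) follow the review excerpts, but the exact tutorial code may differ, and the `torch` import is guarded so the sketch also runs where PyTorch is not installed:

```python
# Sketch only: mirrors the reviewed snippet with the lint fixes applied.
# Names (device, inputs) come from the review excerpts; exact tutorial code may differ.
try:
    import torch
except ImportError:  # allow the sketch to run without PyTorch
    torch = None

if torch is not None:
    # single-line device selection, no trailing whitespace
    use_cuda = torch.cuda.is_available() and torch.cuda.get_device_capability() >= (7, 0)
    device = "cuda" if use_cuda else "cpu"
    # PEP 8: no spaces around '=' in keyword arguments
    inputs = (torch.ones(2, 2, device=device), torch.zeros(2, 2, device=device))
    print(device, inputs[0].shape)
```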
Fix torch_logs tutorial for CPU and unsupported CUDA devices
Problem
The current tutorial wraps the entire example in a CUDA capability check:
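The check itself is not reproduced above; the following is a hedged sketch of the early-exit pattern being described. The capability function is a stub standing in for `torch.cuda.get_device_capability()` so the snippet runs without a GPU:

```python
# Sketch of the early-exit pattern the tutorial reportedly used (assumption:
# the real code queries torch.cuda.get_device_capability(); stubbed here).
def get_device_capability():
    return (6, 1)  # hypothetical pre-Volta GPU

if get_device_capability() < (7, 0):
    # on unsupported devices the entire tutorial body is skipped
    print("Skipping because torch.compile is not supported on this device.")
else:
    print("tutorial body runs here")
```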
This causes the tutorial to output only a "Skipping..." message on:
As a result, users do not see any actual `torch.compile` logging output.

Solution
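A minimal sketch of the device-selection logic the fix introduces; the function name is hypothetical and the `(7, 0)` capability threshold is taken from the review excerpts:

```python
def pick_device(cuda_available: bool, capability: tuple) -> str:
    """Return "cuda" only for a GPU that supports torch.compile (capability >= 7.0)."""
    if cuda_available and capability >= (7, 0):
        return "cuda"
    return "cpu"

print(pick_device(False, (0, 0)))  # no GPU -> cpu
print(pick_device(True, (6, 1)))   # old GPU -> cpu
print(pick_device(True, (8, 0)))   # supported GPU -> cuda
```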
Result
The tutorial now:
Addresses pytorch/pytorch#137285
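Since the tutorial's purpose is to show `torch.compile` logging, a hedged example of enabling that logging via the `TORCH_LOGS` environment variable, which PyTorch reads at import time. The inline command here only echoes the variable to stay runnable without PyTorch; in practice the same prefix would precede a real `torch.compile` script:

```shell
# Illustrative only: set TORCH_LOGS for a run of any torch.compile script.
# The component names (graph_breaks, recompiles) are documented TORCH_LOGS options.
TORCH_LOGS="graph_breaks,recompiles" python -c 'import os; print(os.environ["TORCH_LOGS"])'
```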