
Fix torch_logs tutorial to support CPU fallback and CUDA capability check#3848

Open
Sip4818 wants to merge 2 commits into pytorch:main from Sip4818:fix-torch-logs

Conversation

@Sip4818

@Sip4818 Sip4818 commented May 1, 2026

Fix torch_logs tutorial for CPU and unsupported CUDA devices

Problem

The current tutorial wraps the entire example in a CUDA capability check:

if torch.cuda.get_device_capability() < (7, 0):
    print("Skipping because torch.compile is not supported on this device.")
else:
    # tutorial code

This causes the tutorial to output only a "Skipping..." message on:

  • CPU-only environments
  • CI builds without compatible GPUs

As a result, users do not see any actual torch.compile logging output.


Solution

  • Added device selection logic:
    • Use CUDA if available and supported (compute capability >= 7.0)
    • Otherwise, fall back to CPU
  • Ensured the tutorial runs regardless of the environment
  • Preserved a warning message when CUDA is available but unsupported
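
The selection logic described above can be sketched as a small pure function (the name `select_device` and its signature are illustrative, not taken from the PR):

```python
def select_device(cuda_available, capability):
    """Return "cuda" only when a GPU is present with compute capability >= 7.0.

    `capability` is a (major, minor) tuple, as returned by
    torch.cuda.get_device_capability(); tuples compare lexicographically,
    so (7, 0) <= (7, 5) < (8, 0).
    """
    if cuda_available and capability >= (7, 0):
        return "cuda"
    return "cpu"


# In the tutorial itself this would be driven by torch, roughly:
#   cuda_ok = torch.cuda.is_available()
#   cap = torch.cuda.get_device_capability() if cuda_ok else (0, 0)
#   device = select_device(cuda_ok, cap)
```

Keeping the check in a pure function makes the fallback behavior easy to verify without a GPU.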

Result

The tutorial now:

  • Runs on CPU-only systems
  • Runs on supported CUDA devices
  • Produces meaningful logging output in all environments

Addresses pytorch/pytorch#137285

@pytorch-bot

pytorch-bot Bot commented May 1, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3848

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla

meta-cla Bot commented May 1, 2026

Hi @Sip4818!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@Sip4818
Author

Sip4818 commented May 1, 2026

CLA signed. Happy to make any changes if needed.

@svekars svekars requested a review from mlazos May 1, 2026 22:00
@meta-cla

meta-cla Bot commented May 1, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@meta-cla meta-cla Bot added the cla signed label May 1, 2026
@sekyondaMeta
Contributor

@claude

@claude

claude Bot commented May 4, 2026

Claude finished @sekyondaMeta's task in 1m 37s


Review of PR #3848

  • Gather context and read changed files
  • Analyze the diff against origin/main
  • Provide code review feedback

Summary

The PR removes the early-exit pattern that skipped the entire tutorial on unsupported devices and instead introduces device selection logic so the tutorial always produces output. The overall intent is good — making the tutorial work in more environments is a clear improvement.

Issues

1. Trailing whitespace on line 45

git diff --check flags trailing whitespace on line 45 (the blank line after device = "cuda"). This will likely fail the repo's lintrunner checks.

    else:
        device = "cuda"    # <- trailing whitespace here
else:


2. PEP 8: spaces around = in keyword arguments (line 55)

inputs = (torch.ones(2, 2, device = device), torch.zeros(2, 2, device = device))

Per PEP 8, keyword arguments should not have spaces around =. This should be:

inputs = (torch.ones(2, 2, device=device), torch.zeros(2, 2, device=device))


3. Verbose comments (lines 35-37)

The CLAUDE.md for this repo says to minimize comments and let code be self-explanatory. The three-line comment block:

# Select device:
# - Use CUDA if available and supported
# - Otherwise fall back to CPU

could be shortened to a single line or removed entirely — the if/else logic below is self-evident.

4. Consider whether "Fusion Decisions" output is meaningful on CPU

On CPU, torch.compile uses the inductor backend which may produce different (or empty) fusion/output_code logs compared to CUDA. The tutorial's prose doesn't mention this. It might be helpful to add a brief note that some logging output (particularly fusion decisions and output code) will differ between CPU and CUDA backends. This is a minor suggestion, not a blocker.
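
One way for readers to see the backend difference themselves is to rerun the example with the relevant log artifacts enabled via the TORCH_LOGS environment variable (the artifact names below match the tutorial's topic; the script name is hypothetical):

```shell
# Compare inductor logs between a CPU-only run and a CUDA run of the script.
# On CPU, the fusion/output_code artifacts may be sparser or differ in content.
TORCH_LOGS="fusion,output_code" python torch_logs_tutorial.py
```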

Overall

The approach is sound. The two mechanical issues (trailing whitespace and keyword argument spacing) should be fixed before merge. The comment verbosity is a style nit.


@Sip4818
Author

Sip4818 commented May 4, 2026

Fixed lint issues (whitespace and formatting). Ready for re-review.
