Fix: safely create a new page if no page exists in persistent context #1211
base: main
Conversation
""" WalkthroughThe Changes
Sequence Diagram(s)sequenceDiagram
participant Caller
participant BrowserManager
participant Context
Caller->>BrowserManager: get_page(url)
BrowserManager->>Context: get pages
alt live pages exist
BrowserManager->>Context: select first live page
else no live pages
BrowserManager->>Context: await new_page()
end
BrowserManager-->>Caller: return page
@Praneeth1-O-1 pls provide a test script. I'll take a look. thanks
I have added an example test script in tests/test_browser.py; take a look and get back to me if you face any issues.
Actionable comments posted: 2
🧹 Nitpick comments (3)
tests/test_browser.py (3)
4-4: Remove the unused json import.
json is never referenced in the module and triggers Ruff F401. Deleting it keeps the test clean and avoids CI lint failures.
-import json
🧰 Tools
🪛 Ruff (0.11.9)
4-4: json imported but unused
Remove unused import: json (F401)
10-11: Drop the psutil memory logging or add it to test deps.
psutil is not a declared dependency and may be missing in the runner image, causing ImportErrors. If the memory print isn't essential to validating behaviour, remove the import and prints; otherwise add psutil to dev-requirements.txt.
Also applies to: 118-118
70-78: 120 s page timeouts will slow the suite.
Two-minute timeouts per crawl can stall CI for several minutes. Unless truly necessary, lower to something like 15–30 s and override only in edge-case tests.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/test_browser.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.11.9)
tests/test_browser.py
4-4: json
imported but unused
Remove unused import: json
(F401)
🔇 Additional comments (1)
tests/test_browser.py (1)
57-66: Consider exercising arun_many to reproduce the original concurrency bug.
The regression happened during concurrent execution; sequential arun calls may not catch it. Spawn multiple coroutines with the same persistent session and await crawler.arun_many(...) to ensure no crash occurs when context.pages is empty under concurrency.
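A rough sketch of what such a check could look like, assuming the same crawl4ai imports and configuration parameters used in the test script above (the URL list, timeout, and session name are illustrative, not part of this PR):

import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def test_concurrent_persistent_context():
    browser_config = BrowserConfig(use_persistent_context=True, headless=True)
    run_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        session_id="test_persistent_session",
        page_timeout=30000,
    )
    # Several crawls against the same persistent session; before the fix,
    # a concurrent run could find context.pages empty and crash on pages[0].
    urls = ["https://example.com"] * 5
    async with AsyncWebCrawler(config=browser_config) as crawler:
        results = await crawler.arun_many(urls=urls, config=run_config)
        for result in results:
            assert result is not None

if __name__ == "__main__":
    asyncio.run(test_concurrent_persistent_context())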
)

# JSON extraction schema for testing
schema = {
    "name": "Test Items",
    "baseSelector": "div",
    "fields": [
        {"name": "title", "selector": "h1", "type": "text"},
        {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
    ]
}

# Crawler configuration
crawler_config = CrawlerRunConfig(
    cache_mode=CacheMode.BYPASS,
    extraction_strategy=JsonCssExtractionStrategy(schema),
    session_id="test_persistent_session",
    wait_for="css:body",
    simulate_user=True,
    page_timeout=120000
)

async with AsyncWebCrawler(config=browser_config) as crawler:
    try:
        # Test 1: Initial crawl with persistent context
        print("\nTest 1: Initial crawl with persistent context")
        result = await crawler.arun(
            url="https://example.com",
            config=crawler_config
        )
        print("Initial Crawl Success!")
        print(f"Extracted JSON: {result.extracted_content[:300]}")
        print(f"Links: {len(result.links)}")

        # Test 2: Multiple crawls to test session reuse
        print("\nTest 2: Multiple crawls to test persistent context")
        result = await crawler.arun(
            url="https://example.com",
            config=crawler_config
        )
        print("Second Crawl Success!")
        print(f"Extracted JSON: {result.extracted_content[:300]}")
        print(f"Links: {len(result.links)}")

        # Test 3: Crawl a dynamic site with JavaScript
        print("\nTest 3: Crawl dynamic site with persistent context")
        result = await crawler.arun(
            url="https://www.kidocode.com/degrees/technology",
            config=CrawlerRunConfig(
                cache_mode=CacheMode.BYPASS,
                extraction_strategy=JsonCssExtractionStrategy(schema),
                session_id="test_persistent_session",
                js_code="""document.querySelectorAll('a').forEach(a => a.click());""",
                wait_for="css:body",
                page_timeout=120000
            )
        )
        print("Dynamic Crawl Success!")
        print(f"Extracted JSON: {result.extracted_content[:300]}")
        print(f"Links: {len(result.links)}")

        # Test 4: Additional crawl to verify session persistence
        print("\nTest 4: Additional crawl to verify session persistence")
        result = await crawler.arun(
            url="https://example.com",
            config=CrawlerRunConfig(
                cache_mode=CacheMode.BYPASS,
                extraction_strategy=JsonCssExtractionStrategy(schema),
                session_id="test_persistent_session",
                wait_for="css:body",
                simulate_user=True,
                page_timeout=120000
            )
        )
        print("Additional Crawl Success!")
        print(f"Extracted JSON: {result.extracted_content[:300]}")
        print(f"Links: {len(result.links)}")

        # Test 5: Crawl with new session
        print("\nTest 5: Crawl with new session")
        result = await crawler.arun(
            url="https://example.com",
            config=CrawlerRunConfig(
                cache_mode=CacheMode.BYPASS,
                extraction_strategy=JsonCssExtractionStrategy(schema),
                session_id="new_session",
                wait_for="css:body",
                simulate_user=True,
                page_timeout=120000
            )
        )
        print("Crawl with new session Success!")
        print(f"Extracted JSON: {result.extracted_content[:300]}")
        print(f"Links: {len(result.links)}")

        print(f"Final Memory Usage: {process.memory_info().rss // (1024 * 1024)} MB")
    except Exception as e:
        print(f"Error during test: {str(e)}")
        raise
🛠️ Refactor suggestion
Add real assertions – prints alone don't validate the fix.
The test currently passes as long as no exception bubbles up; it never checks that the crawler actually reused or recreated pages as intended.
Add assertions such as:
assert result is not None
assert len(result.links) > 0
assert that the crawler._browser_context.pages length grows or stays constant according to expectations
Without these, the test won't fail if the regression resurfaces.
Consider also asserting that the same page instance is reused between Test 1 and Test 2, and that a fresh one appears for the new session.
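A rough, self-contained sketch of what those assertions could look like. The private crawler._browser_context attribute is the one named in this comment and may be spelled differently in the actual codebase, and the configuration values are illustrative:

import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def check_page_reuse():
    browser_config = BrowserConfig(use_persistent_context=True, headless=True)
    session_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        session_id="test_persistent_session",
        wait_for="css:body",
        page_timeout=30000,
    )
    async with AsyncWebCrawler(config=browser_config) as crawler:
        first = await crawler.arun(url="https://example.com", config=session_config)
        pages_after_first = len(crawler._browser_context.pages)

        second = await crawler.arun(url="https://example.com", config=session_config)
        pages_after_second = len(crawler._browser_context.pages)

        assert first is not None and second is not None
        assert len(first.links) > 0
        # Same session_id: the existing page should be reused, so the count stays constant.
        assert pages_after_second == pages_after_first

        new_session_config = CrawlerRunConfig(
            cache_mode=CacheMode.BYPASS,
            session_id="new_session",
            wait_for="css:body",
            page_timeout=30000,
        )
        await crawler.arun(url="https://example.com", config=new_session_config)
        # New session_id: a fresh page is expected, so the count should grow.
        assert len(crawler._browser_context.pages) > pages_after_first

if __name__ == "__main__":
    asyncio.run(check_page_reuse())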
🤖 Prompt for AI Agents
In tests/test_browser.py between lines 8 and 122, the test uses print statements
but lacks real assertions to verify correct behavior. Add assertions after each
crawl to check that the result is not None, that the number of extracted links
is greater than zero, and that the browser context's pages list length behaves
as expected (e.g., remains constant for the same session and increases for a new
session). Also, assert that the same page instance is reused between Test 1 and
Test 2, and that a new page instance is created for the new session in Test 5.
This will ensure the test properly validates session reuse and page creation
behavior.
print("\nTest 1: Initial crawl with persistent context") | ||
result = await crawler.arun( | ||
url="https://example.com", | ||
config=crawler_config | ||
) | ||
print("Initial Crawl Success!") | ||
print(f"Extracted JSON: {result.extracted_content[:300]}") | ||
print(f"Links: {len(result.links)}") | ||
|
||
# Test 2: Multiple crawls to test session reuse | ||
print("\nTest 2: Multiple crawls to test persistent context") | ||
result = await crawler.arun( | ||
url="https://example.com", | ||
config=crawler_config | ||
) | ||
print("Second Crawl Success!") | ||
print(f"Extracted JSON: {result.extracted_content[:300]}") | ||
print(f"Links: {len(result.links)}") | ||
|
||
# Test 3: Crawl a dynamic site with JavaScript | ||
print("\nTest 3: Crawl dynamic site with persistent context") | ||
result = await crawler.arun( | ||
url="https://www.kidocode.com/degrees/technology", | ||
config=CrawlerRunConfig( | ||
cache_mode=CacheMode.BYPASS, | ||
extraction_strategy=JsonCssExtractionStrategy(schema), | ||
session_id="test_persistent_session", | ||
js_code="""document.querySelectorAll('a').forEach(a => a.click());""", | ||
wait_for="css:body", | ||
page_timeout=120000 | ||
) | ||
) | ||
print("Dynamic Crawl Success!") | ||
print(f"Extracted JSON: {result.extracted_content[:300]}") | ||
print(f"Links: {len(result.links)}") | ||
|
||
# Test 4: Additional crawl to verify session persistence | ||
print("\nTest 4: Additional crawl to verify session persistence") | ||
result = await crawler.arun( | ||
url="https://example.com", | ||
config=CrawlerRunConfig( | ||
cache_mode=CacheMode.BYPASS, | ||
extraction_strategy=JsonCssExtractionStrategy(schema), | ||
session_id="test_persistent_session", | ||
wait_for="css:body", | ||
simulate_user=True, | ||
page_timeout=120000 | ||
) | ||
) | ||
print("Additional Crawl Success!") | ||
print(f"Extracted JSON: {result.extracted_content[:300]}") | ||
print(f"Links: {len(result.links)}") | ||
|
||
# Test 5: Crawl with new session | ||
print("\nTest 5: Crawl with new session") | ||
result = await crawler.arun( | ||
url="https://example.com", | ||
config=CrawlerRunConfig( | ||
cache_mode=CacheMode.BYPASS, | ||
extraction_strategy=JsonCssExtractionStrategy(schema), | ||
session_id="new_session", | ||
wait_for="css:body", | ||
simulate_user=True, | ||
page_timeout=120000 | ||
) | ||
) | ||
print("Crawl with new session Success!") | ||
print(f"Extracted JSON: {result.extracted_content[:300]}") | ||
print(f"Links: {len(result.links)}") |
🛠️ Refactor suggestion
Relying on external sites will make the test flaky.
https://example.com and https://www.kidocode.com/... are outside your control; network hiccups or site changes will break CI. Host minimal HTML fixtures in-repo or spin up a tiny HTTP server during the test instead.
-url="https://example.com",
+url=f"http://localhost:{port}/example_fixture.html",
Same for the dynamic JS page: serve a local file with the required anchors and script.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/test_browser.py between lines 48 and 116, the tests rely on external
websites like https://example.com and https://www.kidocode.com, which can cause
flaky tests due to network issues or site changes. To fix this, replace these
external URLs with local minimal HTML fixtures stored within the repository or
start a lightweight local HTTP server during the test to serve these fixtures.
Also, for the dynamic JavaScript test, serve a local HTML file containing the
necessary anchors and scripts instead of relying on an external site.
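A minimal sketch of the in-test fixture server idea, using only the standard library; the fixture directory, file name, and port handling here are illustrative, not part of this PR:

import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def start_fixture_server(directory: str = "tests/fixtures"):
    # Serve static HTML fixtures from the repo on a free local port.
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = HTTPServer(("127.0.0.1", 0), handler)  # port 0 -> any free port
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    port = server.server_address[1]
    return server, port

# Usage inside the test (sketch):
# server, port = start_fixture_server()
# result = await crawler.arun(
#     url=f"http://127.0.0.1:{port}/example_fixture.html",
#     config=crawler_config,
# )
# server.shutdown()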
I also hit this exact bug today while building a documentation scraper that processes multiple URLs in sequence, and this fix is exactly right. It explains why browser_mode="builtin" (which internally forces use_managed_browser=True) was hitting the issue even without explicit session management. I temporarily worked around it by removing browser_mode="builtin" from BrowserConfig, which confirmed that managed-browser context reuse was the root cause. Your testing with arun_many matches my use case perfectly. Any timeline on getting this merged? It's blocking multi-URL batch-processing workflows.
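For reference, a rough sketch of the kind of configuration described in the comment above; the parameter names follow the BrowserConfig/CrawlerRunConfig options already mentioned in this thread, and the URL list is illustrative:

from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

# browser_mode="builtin" is reported above to force use_managed_browser=True,
# which is the code path that could hit an empty context.pages before this fix.
browser_config = BrowserConfig(browser_mode="builtin", headless=True)
run_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, page_timeout=30000)

async def scrape_docs(doc_urls):
    async with AsyncWebCrawler(config=browser_config) as crawler:
        # Batch-processing several documentation URLs is where the crash appeared.
        return await crawler.arun_many(urls=doc_urls, config=run_config)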
Fixes a crash that occurs when using use_managed_browser=True and the context.pages list is empty during concurrent execution.
Previously, context.pages[0] was accessed directly without checking if the list was empty, which led to a list index out of range or context-closed error.
This fix ensures that a new page is created if no pages exist.
Fixes #1198
crawl4ai/browser_manager.py
→ Updated get_page() method to safely handle the case when no pages exist in a context by checking if context.pages is empty and creating a new page when needed.
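Roughly, the guard looks like the sketch below; this is illustrative only, the real get_page() in browser_manager.py carries additional session and setup logic, and the context-lookup helper shown here is a hypothetical placeholder:

# Inside BrowserManager (sketch)
async def get_page(self, crawler_run_config):
    context = await self._get_context_for(crawler_run_config)  # hypothetical helper
    # Keep only pages that are still open; a closed page would fail on use.
    live_pages = [p for p in context.pages if not p.is_closed()]
    if live_pages:
        page = live_pages[0]             # reuse the first live page
    else:
        page = await context.new_page()  # previously context.pages[0] raised IndexError here
    return page, context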
Ran the crawler with use_persistent_context=True and multiple URLs using arun_many.
Confirmed that:
No crashes occurred when context.pages was empty.
New pages were created when needed.
Existing pages were reused when possible.