
connection issues with large datasets from MySQL #928

Closed
@ConstantinoSchillebeeckx

Description

The default pymysql cursor used here is Cursor; as I understand it, this cursor buffers the entire result set on the client even though we might only call fetchmany.

A more appropriate cursor for batch reads might be SSCursor, which the PyMySQL docs describe as follows:

Unbuffered Cursor, mainly useful for queries that return a lot of data, or for connections to remote servers over a slow network.

Instead of copying every row of data into a buffer, this will fetch rows as needed. The upside of this is the client uses much less memory, and rows are returned much faster when traveling over a slow network or if the result set is very big.
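For concreteness, a minimal sketch of a batch read with plain pymysql (the connection values and the process callback are placeholders, not anything from this repo):

```python
import pymysql
import pymysql.cursors

# Placeholder connection details; cursorclass is the key difference:
# SSCursor streams rows from the server instead of buffering them all client-side.
conn = pymysql.connect(
    host="localhost",
    user="user",
    password="secret",
    database="mydb",
    cursorclass=pymysql.cursors.SSCursor,
)
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM big_table")
        while True:
            # With SSCursor this pulls only `size` rows at a time; with the
            # default Cursor the whole result set is already in memory here.
            rows = cursor.fetchmany(size=10_000)
            if not rows:
                break
            process(rows)  # hypothetical consumer of each batch
finally:
    conn.close()
```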

Would it make sense to expose a cursor kwarg here so that we have control over which cursor is used? Or we could be cheeky and swap that cursor in automatically whenever read_sql_query is called with a non-zero chunksize?
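To make that concrete, here is a rough sketch of the second option, assuming a pymysql-backed read_sql_query; the function below is an illustrative stand-in for how the swap could work, not this library's actual implementation:

```python
from typing import Iterator, Optional, Type

import pandas as pd
import pymysql
import pymysql.cursors


def read_sql_query(
    sql: str,
    connect_kwargs: dict,
    chunksize: Optional[int] = None,
    cursorclass: Optional[Type[pymysql.cursors.Cursor]] = None,
) -> Iterator[pd.DataFrame]:
    """Illustrative stand-in: yield query results as DataFrames."""
    if cursorclass is None:
        # The "cheeky" automatic swap: unbuffered cursor for chunked reads.
        cursorclass = pymysql.cursors.SSCursor if chunksize else pymysql.cursors.Cursor
    conn = pymysql.connect(cursorclass=cursorclass, **connect_kwargs)
    try:
        with conn.cursor() as cur:
            cur.execute(sql)
            cols = [d[0] for d in cur.description]
            if not chunksize:
                yield pd.DataFrame(cur.fetchall(), columns=cols)
                return
            while True:
                rows = cur.fetchmany(chunksize)
                if not rows:
                    return
                yield pd.DataFrame(rows, columns=cols)
    finally:
        conn.close()
```

An explicit cursorclass would still win over the automatic choice, so callers who want the buffered cursor with chunks could keep it.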

Either way, I'm open to submitting a PR if it isn't a waste of your time.

Labels: minor release (will be addressed in the next minor release), question (further information is requested), ready to release
