Turbodbc
========


2.5.0
-------------

* Added an option to ``fetchallarrow()`` that fetches integer columns in the
  smallest possible integer type the retrieved values fit into (see the sketch
  below). While this reduces the memory footprint of the resulting table, the
  schema of the table then depends on the data it contains.
* Updated Apache Arrow support to work with version 0.8.x
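
A minimal sketch of the option, assuming the keyword argument is named
``adaptive_integers`` and that a data source ``my_dsn`` is reachable:

.. code-block:: python

    import turbodbc

    connection = turbodbc.connect(dsn="my_dsn")
    cursor = connection.cursor()
    cursor.execute("SELECT small_int_column FROM my_table")

    # With adaptive integers enabled, a column whose values all fit into
    # int8/int16/int32 is returned in that narrower type instead of int64.
    table = cursor.fetchallarrow(adaptive_integers=True)
    print(table.schema)  # the integer types depend on the retrieved values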

2.4.1
-------------

* Fixed a memory leak in ``fetchallarrow()`` that increased the reference
  count of the returned table by one too many.

2.4.0
-------------

* Added support for Apache Arrow ``pyarrow.Table`` objects as input for
  ``executemanycolumns()`` (see the sketch below). In addition to direct
  Arrow support, this should also help with more graceful handling of pandas
  ``DataFrame`` objects, since ``pa.Table.from_pandas(...)`` handles additional
  corner cases of pandas data structures. Big thanks to xhochy!
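
A short sketch of the Arrow input path, assuming a placeholder table
``my_table`` with matching columns and a reachable data source ``my_dsn``:

.. code-block:: python

    import pandas as pd
    import pyarrow as pa
    import turbodbc

    connection = turbodbc.connect(dsn="my_dsn")
    cursor = connection.cursor()

    # from_pandas() handles corner cases such as nullable columns.
    df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})
    arrow_table = pa.Table.from_pandas(df, preserve_index=False)

    cursor.executemanycolumns(
        "INSERT INTO my_table (id, name) VALUES (?, ?)", arrow_table
    )
    connection.commit()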

2.3.0
-------------

* Added an option to ``fetchallarrow()`` that enables fetching string
  columns as dictionary-encoded string columns (see the sketch below). In most
  cases, this increases performance and reduces RAM usage. Arrow columns of type
  ``dictionary[string]`` result in ``pandas.Categorical`` columns on conversion.
* Updated pybind11 dependency to version 2.2+
* Fixed a symbol visibility issue when building Arrow unit tests on systems
that hide symbols by default.
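
A minimal sketch, assuming the keyword argument is named
``strings_as_dictionary`` and a reachable data source ``my_dsn``:

.. code-block:: python

    import turbodbc

    connection = turbodbc.connect(dsn="my_dsn")
    cursor = connection.cursor()
    cursor.execute("SELECT country FROM customers")

    # String columns come back dictionary-encoded...
    table = cursor.fetchallarrow(strings_as_dictionary=True)

    # ...and convert to pandas.Categorical columns.
    df = table.to_pandas()
    print(df["country"].dtype)  # category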

2.2.0
-------------

* Added new keyword argument ``large_decimals_as_64_bit_types`` to
  ``make_options()``. If set to ``True``, decimals with more than ``18``
  digits will be retrieved as 64-bit integers or floats as appropriate.
  The default retains the previous behavior of returning strings.
  (The new options are shown together in a sketch after this list.)
* Added support for the ``datetime64[ns]`` data type in ``executemanycolumns()``.
  This is particularly helpful when dealing with `pandas <https://pandas.pydata.org>`_
  ``DataFrame`` objects, since ``datetime64[ns]`` is the type pandas uses for
  timestamps.
* Added the keyword argument ``limit_varchar_results_to_max`` to ``make_options()``.
  This allows truncating ``VARCHAR(n)`` fields to ``varchar_max_character_limit``
  characters; see the item on ``VARCHAR(max)`` fields below.
* Added the possibility to enforce NumPy and Apache Arrow requirements as extras
  during installation: ``pip install turbodbc[arrow,numpy]``
* Updated Apache Arrow support to work with version 0.6.x
* Fixed an issue with retrieving result sets with ``VARCHAR(max)`` fields and
similar types. The size of the buffer allocated for such fields can be controlled
with the ``varchar_max_character_limit`` option to ``make_options()``.
* Fixed an `issue with some versions of Boost <https://svn.boost.org/trac10/ticket/3471>`_
  that led to problems with ``datetime64[us]`` columns in ``executemanycolumns()``.
  An overflow when converting microseconds since 1970 to a database-readable
  timestamp could occur, badly garbling the timestamps in the process. The issue
  surfaced with Debian 7's Boost version (1.49), although the Boost
  issue was allegedly fixed in version 1.43.
* Fixed an issue that led to undefined behavior when character sequences
  could not be decoded into Unicode code points. The new (and defined) behavior
  is to ignore the offending character sequences completely.
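
A minimal sketch combining the new ``make_options()`` keywords from this
release, assuming a reachable data source ``my_dsn`` and a placeholder table
``events``:

.. code-block:: python

    import pandas as pd
    import turbodbc

    # Truncate VARCHAR(max)-like result fields to 10000 characters and
    # fetch large decimals as 64-bit types instead of strings.
    options = turbodbc.make_options(
        varchar_max_character_limit=10000,
        limit_varchar_results_to_max=True,
        large_decimals_as_64_bit_types=True,
    )
    connection = turbodbc.connect(dsn="my_dsn", turbodbc_options=options)
    cursor = connection.cursor()

    # datetime64[ns] columns, the pandas timestamp type, can be passed
    # directly to executemanycolumns().
    timestamps = pd.to_datetime(["2017-01-01", "2017-06-15"]).values
    cursor.executemanycolumns(
        "INSERT INTO events (happened_at) VALUES (?)", [timestamps]
    )
    connection.commit()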

2.1.0
-------------

* Added new method ``cursor.executemanycolumns()`` that accepts parameters
  in columnar fashion as a list of NumPy (masked) arrays (see the sketch below).
* CMake build now supports ``conda`` environments
* CMake build offers a ``DISABLE_CXX11_ABI`` option to fix linking issues
  with ``pyarrow`` on systems where the new C++11-compliant ABI is enabled
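
A minimal sketch of the columnar interface, assuming a placeholder table
``my_table`` and a reachable data source ``my_dsn``; masked entries are
inserted as ``NULL``:

.. code-block:: python

    import numpy as np
    import turbodbc

    connection = turbodbc.connect(dsn="my_dsn")
    cursor = connection.cursor()

    ids = np.array([1, 2, 3], dtype=np.int64)
    # The mask marks the second value as missing; it is inserted as NULL.
    scores = np.ma.MaskedArray([10.5, 0.0, 7.25], mask=[False, True, False])

    cursor.executemanycolumns(
        "INSERT INTO my_table (id, score) VALUES (?, ?)", [ids, scores]
    )
    connection.commit()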
