Littletable

Latest version: v2.2.5


2.1.2

-------------
- Added `json_encoder` argument to `Table.json_export`, so that custom data
fields can be exported without raising a JSON encoding error. The
`json_encoder` argument can take a single `JSONEncoder` subclass, or a tuple
of subclasses to be tried in sequence; each should follow the pattern given
in the Python docs for the `json` module. (See the updated code in
examples/peps.py for a custom `JSONEncoder`.)

Also added `json_decoder` argument to `Table.json_import`, though it only
supports passing a single class.
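Such encoders follow the standard `json` module pattern from the Python docs; a stdlib-only sketch (the `DateEncoder` name and the date field are illustrative, not part of littletable):

```python
import datetime
import json

# Custom encoder following the pattern in the stdlib json docs: values
# json cannot serialize natively (here, dates) are handled in default().
class DateEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, (datetime.datetime, datetime.date)):
            return o.isoformat()
        return super().default(o)

record = {"pep": 8, "created": datetime.date(1995, 7, 5)}
print(json.dumps(record, cls=DateEncoder))   # {"pep": 8, "created": "1995-07-05"}
```

An encoder class like this (or a tuple of such classes) would then be passed as the `json_encoder` argument to `Table.json_export`.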

2.1.1

-------------
- Added `as_table` argument to text search functions, to return search results
as a table instead of as a list of (record, score) tuples. If `as_table` is
True and table records are of a type that will accept new attributes, each
record's score and optional match words are added as
`<search_attribute>_search_score` and `<search_attribute>_search_words` fields.

New example `peps.py` showcases some of these new JSON and full-text search
features, using PEP data gathered from python.org.

- New example `future_import_features.py` peeks into the `__future__` module
to list out all the features that are defined, and their related metadata.

- Added docstring and annotations for generated `table.search.<search_attr>` methods.

- Added docstring for generated `table.by.<index_attr>` methods, and more explanation
in `create_index()` docstring on how to use indexed fields.

- Passing an unknown path element in `Table.json_import(path=path)` now raises
`KeyError` instead of an unhelpful `TypeError`.

2.1.0

-------------
- BREAKING CHANGES:

- littletable drops support for Python 3.6.

- `Table.json_import()` and `Table.json_export()` now default to non-streamed JSON.
Code that uses these methods in streaming mode must now call them with the
new `streaming=True` argument.

- Fixed type annotations for Table indexes, and verified type subclassing.

For this table:

tbl = lt.Table()
tbl.create_index("idx")

The following `isinstance()` tests are True:

Object collections.abc Abstract Base Classes
───────────────────────────────────────────────────
tbl Callable
Sized
Iterable
Container
Collection
Reversible
Sequence
tbl.by.idx Mapping

- `Table.csv_export()`, `tsv_export()`, and `json_export()`, if called with None as
the output destination (None is now the default), will return a string
containing the exported data.

# print first 10 rows of my_table as CSV data
print(my_table[:10].csv_export())
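The return-a-string behavior follows the usual export-to-buffer pattern; a stdlib-only sketch of that pattern (this `csv_export` function is illustrative, not littletable's implementation):

```python
import csv
import io

# With no destination given, write into an in-memory buffer and return
# its contents; with a file-like dest, write there and return nothing.
def csv_export(rows, dest=None):
    buffer = dest if dest is not None else io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    if dest is None:
        return buffer.getvalue()

data = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
print(csv_export(data))
```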

- `Table.json_export()` takes an optional parameter, `streaming`, to control
whether the resulting JSON is a single JSON list element (if `streaming` is False),
or a separate JSON element per Table item (if `streaming` is True); the default
value is False. `streaming` is useful when passing data over a streaming protocol,
so that the Table contents can be unmarshaled separately on the receiving end.

- `Table.json_import()` takes two optional parameters:

- `streaming` to indicate that the input stream contains multiple JSON objects
(if streaming=True), or a single JSON list of objects (if streaming=False);
defaults to False
- `path`, a dot-delimited path of keys to read a list of JSON objects from a
sub-element of the input JSON text (only valid if streaming=False); defaults
to ""
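The difference between the two forms can be sketched with the stdlib `json` module alone (no littletable API assumed):

```python
import json

records = [{"a": 1}, {"a": 2}, {"a": 3}]

# streaming=False: the whole table is a single JSON list element.
as_list = json.dumps(records)

# streaming=True: one JSON element per record (newline-delimited here),
# so a receiver can unmarshal each record as it arrives.
as_stream = "\n".join(json.dumps(rec) for rec in records)

# Receiver side for the streamed form: decode line by line.
decoded = [json.loads(line) for line in as_stream.splitlines()]
print(decoded == records)   # True
```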

2.0.7

-------------
- Added support for sliced indexing into `Table` indexes, as a simple
form of range selection and filtering.

# previously, range selection used a comparator:
employees.where(salary=Table.ge(50000))

# now it can use a slice on an index:
employees.create_index("salary")
employees.by.salary[50000:]

Unlike Python list slices, `Table` index slices can use non-integer data
types (as long as they support `>=` and `<` comparison operations):

jan_01 = datetime.date(2000, 1, 1)
apr_01 = datetime.date(2000, 4, 1)

# previously:
first_qtr_sales = sales.where(date=Table.in_range(jan_01, apr_01))

# now, with a sliced index:
sales.create_index("date")
first_qtr_sales = sales.by.date[jan_01: apr_01]

Slices with a step field (as in `[start : stop : step]`) are not supported.

See full example code in examples/sliced_indexing.py.

- Added new transform methods for importing timestamps as part of
CSV imports:

- Table.parse_datetime(pattern, empty, on_error)
- Table.parse_date(pattern, empty, on_error)
- Table.parse_timedelta(pattern, reference_time, empty, on_error)

Each takes a pattern as would be used for `datetime.strptime()`, plus
optional values to return for empty inputs (default='') or erroneous inputs
(default=None). `parse_timedelta` also takes a `reference_time` argument
used to compute the resulting timedelta; the default is 00:00:00.

See full example code in examples/time_conversions.py.
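The shape of such a transform can be sketched with the stdlib alone (the `make_datetime_parser` helper below is illustrative, not littletable's API):

```python
import datetime

# Parse with datetime.strptime, substituting placeholder values for
# empty or unparseable inputs, mirroring the empty/on_error parameters.
def make_datetime_parser(pattern, empty="", on_error=None):
    def parse(value):
        if value == "":
            return empty
        try:
            return datetime.datetime.strptime(value, pattern)
        except ValueError:
            return on_error
    return parse

parse = make_datetime_parser("%Y-%m-%d %H:%M")
parse("2024-03-01 09:30")   # -> datetime.datetime(2024, 3, 1, 9, 30)
parse("")                   # -> "" (the empty placeholder)
parse("not a date")         # -> None (the on_error placeholder)
```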

- `as_html()` now accepts an optional dict argument `table_properties`, to add HTML
`<table>`-level attributes to generated HTML:

tbl = lt.Table().csv_import("""\
a,b,c
1,2,3
4,5,6
""")
html = tbl.as_html(fields="a b c", table_properties={"border": 1, "cellpadding": 5})

- Workaround issue when running `Table.present()` in a terminal environment that does not support `isatty()`:

AttributeError: 'OutputCtxManager' object has no attribute 'isatty'

2.0.6

-------------
- Simplified `Table.where()` when a filtering method takes a single
value as found in a record attribute.

For example, to find the odd `a` values in a Table, you would
previously write:

tbl.where(lambda rec: is_odd(rec.a))

Now you can write:

tbl.where(a=is_odd)

- The `Table.re_match` comparator is deprecated, and can be replaced with
this form:

word_str = "DOT"

# test if word_str is in field 'name' - DEPRECATED
tbl.where(name=lt.Table.re_match(rf".*?\b{word_str}\b"))

# test if word_str is in field 'name'
contains_word = re.compile(rf"\b{word_str}\b").search
tbl.where(name=contains_word)

See the `explore_unicode.py` example (line 185).

`Table.re_match` will be removed in a future release.
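The replacement works because a compiled pattern's bound `search` method is itself a one-argument predicate: truthy on a match, `None` otherwise. A stdlib-only illustration:

```python
import re

word_str = "DOT"
contains_word = re.compile(rf"\b{word_str}\b").search

# The bound search method returns a Match (truthy) or None (falsy),
# so it can be used directly as a filter predicate.
names = ["DOT matrix", "polka-dot", "DOTfile", "the DOT"]
print([name for name in names if contains_word(name)])   # ['DOT matrix', 'the DOT']
```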

- Added helper method `Table.convert_numeric` to simplify converting
imported CSV values from str to int or float, or to replace empty
values with placeholders such as "n/a". Use as a transform in the
`transforms` dict argument of `Table.csv_import`.

- Added example `nfkc_normalization.py`, to show Unicode characters
that normalize to ASCII in Python scripts.

- Fixed internal bugs if using `groupby` when calling `Table.as_html()`.

- Added tests for storing objects whose `__slots__` are defined using a dict
(a feature added in Python 3.8 to attach docstrings to attributes defined
using `__slots__`).

2.0.5

-------------
- Added support for import/export to local Excel .xlsx spreadsheets.

tbl = Table().excel_import("data_table.xlsx")

Requires installation of openpyxl to do the spreadsheet handling.
PR submitted by Brunno Vanelli, very nice work, thanks!

A simple example script examples/excel_data_types.py has been added
to show how the data types of values are preserved or converted to
standard Python types when importing from Excel.

- Fixed count(), index(), remove(), and remove_many() to accept dict
objects. Identified while addressing issue 3.

- Fixed default of pop() to be -1.

- Added test cases for storing objects created using typing.NamedTuple.
