tinyms.data

class tinyms.data.UnalignedDataset(dataset_path, phase, max_dataset_size=inf, shuffle=True)[source]

This dataset class can load unaligned/unpaired datasets.

Parameters
  • dataset_path (str) – The path of images (should have subfolders trainA, trainB, testA, testB, etc.).

  • phase (str) – Train or test. It requires two directories in dataset_path, like trainA and trainB, to host training images from domain A '{dataset_path}/trainA' and from domain B '{dataset_path}/trainB' respectively.

  • max_dataset_size (int) – Maximum number of image paths to return.

Returns

Two lists of image paths, one for each domain.
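
A minimal construction sketch (the dataset path below is hypothetical and assumes the trainA/trainB layout described above):

>>> from tinyms.data import UnalignedDataset
>>>
>>> dataset = UnalignedDataset('/path/to/unaligned_dataset', phase='train')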

class tinyms.data.GanImageFolderDataset(dataset_path, max_dataset_size=inf)[source]

This dataset class can load images from an image folder.

Parameters
  • dataset_path (str) – '{dataset_path}/testA', '{dataset_path}/testB', etc.

  • max_dataset_size (int) – Maximum number of image paths to return.

Returns

List of image paths.
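
A minimal construction sketch (the path below is hypothetical; the folder is expected to contain subfolders such as testA and testB):

>>> from tinyms.data import GanImageFolderDataset
>>>
>>> dataset = GanImageFolderDataset('/path/to/gan_image_folder')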

class tinyms.data.ImdbDataset(imdb_path, glove_path, embed_size=300)[source]

Parse aclImdb data into features and labels. Pipeline: sentence -> tokenized -> encoded -> padded -> features.

Parameters
  • imdb_path (str) – The path where the aclImdb dataset is stored.

  • glove_path (str) – The path where the GloVe embeddings are stored.

  • embed_size (int) – Embedding size. Default: 300.

Examples

>>> from tinyms.data import ImdbDataset
>>>
>>> imdb_ds = ImdbDataset('./aclImdb', './glove')
convert_to_mindrecord(preprocess_path, shard_num=1)[source]

Convert the aclImdb dataset to the MindRecord format.

get_datas(seg)[source]

Get features, labels, and weights via gensim.

parse()[source]

Parse aclImdb data into memory.
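
A hedged sketch of chaining the methods above, continuing from the imdb_ds constructed in the example (the preprocess path and the seg value 'train' are assumptions for illustration):

>>> imdb_ds.parse()
>>> train_data = imdb_ds.get_datas(seg='train')
>>> imdb_ds.convert_to_mindrecord('./preprocess', shard_num=1)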

class tinyms.data.BertDataset(data_dir, schema_dir=None, shuffle=True, num_parallel_workers=None)[source]

This dataset class can load BERT data from a data folder.

Parameters
  • data_dir (str) – ‘{data_dir}/result1.tfrecord’, ‘{data_dir}/result2.tfrecord’, etc.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • shuffle (Union[bool, Shuffle level], optional) –

    Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed. If shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Otherwise, there are two levels of shuffling:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • schema (Union[str, Schema], optional) – Path to the JSON schema file or schema object (default=None). If the schema is not provided, the meta data from the TFData file is considered the schema.

Examples

>>> from tinyms.data import BertDataset
>>>
>>> bert_ds = BertDataset('data')
class tinyms.data.DistributedSampler(dataset_size, num_replicas=None, rank=None, shuffle=True)[source]

Distributed sampler.

Parameters
  • dataset_size (int) – Length of the dataset to be sampled.

  • num_replicas (int) – Number of replicas.

  • rank (int) – Device rank.

  • shuffle (bool) – Whether the dataset needs to be shuffled. Default: True.

Returns

DistributedSampler instance.
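
A minimal construction sketch for a 2-way distributed setup (the dataset_size of 100 is illustrative; the resulting sampler can then be handed to a source dataset that accepts a sampler argument):

>>> from tinyms.data import DistributedSampler
>>>
>>> sampler = DistributedSampler(dataset_size=100, num_replicas=2, rank=0)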

class tinyms.data.CelebADataset(dataset_dir, num_parallel_workers=None, shuffle=None, usage='all', sampler=None, decode=False, extensions=None, num_samples=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading and parsing the CelebA dataset. Currently only list_attr_celeba.txt, the attribute annotation file of the dataset, is supported.

Note

The generated dataset has two columns ['image', 'attr']. The image tensor is of the uint8 type. The attribute tensor is of the uint32 type and one-hot encoded.

Citation of CelebA dataset.

@article{DBLP:journals/corr/LiuLWT14,
author    = {Ziwei Liu and Ping Luo and Xiaogang Wang and Xiaoou Tang},
title     = {Deep Learning Face Attributes in the Wild},
journal   = {CoRR},
volume    = {abs/1411.7766},
year      = {2014},
url       = {http://arxiv.org/abs/1411.7766},
archivePrefix = {arXiv},
eprint    = {1411.7766},
timestamp = {Tue, 10 Dec 2019 15:37:26 +0100},
biburl    = {https://dblp.org/rec/journals/corr/LiuLWT14.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
howpublished = {http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html},
description  = {CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes dataset
                with more than 200K celebrity images, each with 40 attribute annotations.
                The images in this dataset cover large pose variations and background clutter.
                CelebA has large diversities, large quantities, and rich annotations, including
                * 10,177 number of identities,
                * 202,599 number of face images, and
                * 5 landmark locations, 40 binary attributes annotations per image.
                The dataset can be employed as the training and test sets for the following computer
                vision tasks: face attribute recognition, face detection, landmark (or facial part)
                localization, and face editing & synthesis.}
}
Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, will use value set in the config).

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None).

  • usage (str) – One of 'all', 'train', 'valid' or 'test' (default='all', will read all samples).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None).

  • decode (bool, optional) – Decode the images after reading (default=False).

  • extensions (list[str], optional) – List of file extensions to be included in the dataset (default=None).

  • num_samples (int, optional) – The number of images to be included in the dataset (default=None, will include all images).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Examples

>>> celeba_dataset_dir = "/path/to/celeba_dataset_directory"
>>> dataset = ds.CelebADataset(dataset_dir=celeba_dataset_dir, usage='train')
class tinyms.data.Cifar100Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading and parsing Cifar100 dataset.

The generated dataset has three columns [‘image’, ‘coarse_label’, ‘fine_label’]. The type of the image tensor is uint8. The coarse and fine labels are each a scalar uint32 tensor. This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Citation of Cifar100 dataset.

@techreport{Krizhevsky09,
author       = {Alex Krizhevsky},
title        = {Learning multiple layers of features from tiny images},
institution  = {},
year         = {2009},
howpublished = {http://www.cs.toronto.edu/~kriz/cifar.html},
description  = {This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images
                each. There are 500 training images and 100 testing images per class. The 100 classes in
                the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the
                class to which it belongs) and a "coarse" label (the superclass to which it belongs).}
}
Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be "train", "test" or "all". "train" will read from 50,000 train samples, "test" will read from 10,000 test samples, "all" will read from all 60,000 samples (default=None, all samples).

  • num_samples (int, optional) – The number of images to be included in the dataset. (default=None, all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Raises
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> cifar100_dataset_dir = "/path/to/cifar100_dataset_directory"
>>>
>>> # 1) Get all samples from CIFAR100 dataset in sequence
>>> dataset = ds.Cifar100Dataset(dataset_dir=cifar100_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from CIFAR100 dataset
>>> dataset = ds.Cifar100Dataset(dataset_dir=cifar100_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # In CIFAR100 dataset, each dictionary has 3 keys: "image", "fine_label" and "coarse_label"
class tinyms.data.Cifar10Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading and parsing Cifar10 dataset.

The generated dataset has two columns [‘image’, ‘label’]. The type of the image tensor is uint8. The label is a scalar uint32 tensor. This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Citation of Cifar10 dataset.

@techreport{Krizhevsky09,
author       = {Alex Krizhevsky},
title        = {Learning multiple layers of features from tiny images},
institution  = {},
year         = {2009},
howpublished = {http://www.cs.toronto.edu/~kriz/cifar.html},
description  = {The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes,
                with 6000 images per class. There are 50000 training images and 10000 test images.}
}
Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be "train", "test" or "all". "train" will read from 50,000 train samples, "test" will read from 10,000 test samples, "all" will read from all 60,000 samples (default=None, all samples).

  • num_samples (int, optional) – The number of images to be included in the dataset. (default=None, all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Raises
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> cifar10_dataset_dir = "/path/to/cifar10_dataset_directory"
>>>
>>> # 1) Get all samples from CIFAR10 dataset in sequence
>>> dataset = ds.Cifar10Dataset(dataset_dir=cifar10_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from CIFAR10 dataset
>>> dataset = ds.Cifar10Dataset(dataset_dir=cifar10_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # 3) Get samples from CIFAR10 dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.Cifar10Dataset(dataset_dir=cifar10_dataset_dir, num_shards=2, shard_id=0)
>>>
>>> # In CIFAR10 dataset, each dictionary has keys "image" and "label"
class tinyms.data.CLUEDataset(dataset_files, task='AFQMC', usage='train', num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

A source dataset that reads and parses CLUE datasets. CLUE, the Chinese Language Understanding Evaluation Benchmark, is a collection of datasets, baselines, pre-trained models, corpus and leaderboard. Supported CLUE classification tasks: ‘AFQMC’, ‘TNEWS’, ‘IFLYTEK’, ‘CMNLI’, ‘WSC’ and ‘CSL’.

Citation of CLUE dataset.

@article{CLUEbenchmark,
title   = {CLUE: A Chinese Language Understanding Evaluation Benchmark},
author  = {Liang Xu, Xuanwei Zhang, Lu Li, Hai Hu, Chenjie Cao, Weitang Liu, Junyi Li, Yudong Li,
           Kai Sun, Yechen Xu, Yiming Cui, Cong Yu, Qianqian Dong, Yin Tian, Dian Yu, Bo Shi, Jun Zeng,
           Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou,
           Shaoweihua Liu, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Zhenzhong Lan},
journal = {arXiv preprint arXiv:2004.05986},
year    = {2020},
howpublished = {https://github.com/CLUEbenchmark/CLUE},
description  = {CLUE, a Chinese Language Understanding Evaluation benchmark. It contains eight different
                tasks, including single-sentence classification, sentence pair classification, and machine
                reading comprehension.}
}
Parameters
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in a lexicographical order.

  • task (str, optional) – The kind of task, one of 'AFQMC', 'TNEWS', 'IFLYTEK', 'CMNLI', 'WSC' and 'CSL' (default='AFQMC').

  • usage (str, optional) – Specify the 'train', 'test' or 'eval' part of the dataset (default='train').

  • num_samples (int, optional) – Number of samples (rows) to read (default=None, reads the full dataset).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (Union[bool, Shuffle level], optional) –

    Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed. If shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Otherwise, there are two levels of shuffling:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Examples

>>> clue_dataset_dir = ["/path/to/clue_dataset_file"] # contains 1 or multiple clue files
>>> dataset = ds.CLUEDataset(dataset_files=clue_dataset_dir, task='AFQMC', usage='train')
class tinyms.data.CocoDataset(dataset_dir, annotation_file, task='Detection', num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading and parsing COCO dataset.

CocoDataset supports four kinds of tasks, which are Object Detection, Keypoint Detection, Stuff Segmentation and Panoptic Segmentation of 2017 Train/Val/Test dataset.

The generated dataset has multiple columns:

  • task=’Detection’, column: [[‘image’, dtype=uint8], [‘bbox’, dtype=float32], [‘category_id’, dtype=uint32], [‘iscrowd’, dtype=uint32]].

  • task='Stuff', column: [['image', dtype=uint8], ['segmentation', dtype=float32], ['iscrowd', dtype=uint32]].

  • task=’Keypoint’, column: [[‘image’, dtype=uint8], [‘keypoints’, dtype=float32], [‘num_keypoints’, dtype=uint32]].

  • task=’Panoptic’, column: [[‘image’, dtype=uint8], [‘bbox’, dtype=float32], [‘category_id’, dtype=uint32], [‘iscrowd’, dtype=uint32], [‘area’, dtype=uint32]].

This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. CocoDataset doesn’t support PKSampler. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Citation of Coco dataset.

@article{DBLP:journals/corr/LinMBHPRDZ14,
author        = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and
                 Lubomir D. Bourdev and  Ross B. Girshick and James Hays and
                 Pietro Perona and Deva Ramanan and Piotr Doll{'{a}}r and C. Lawrence Zitnick},
title         = {Microsoft {COCO:} Common Objects in Context},
journal       = {CoRR},
volume        = {abs/1405.0312},
year          = {2014},
url           = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint        = {1405.0312},
timestamp     = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl        = {https://dblp.org/rec/journals/corr/LinMBHPRDZ14.bib},
bibsource     = {dblp computer science bibliography, https://dblp.org},
description   = {COCO is a large-scale object detection, segmentation, and captioning dataset.
                 It contains 91 common object categories with 82 of them having more than 5,000
                 labeled instances. In contrast to the popular ImageNet dataset, COCO has fewer
                 categories but more instances per category.}
}
Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • annotation_file (str) – Path to the annotation JSON.

  • task (str) – Set the task type for reading COCO data. Supported task types: ‘Detection’, ‘Stuff’, ‘Panoptic’ and ‘Keypoint’ (default=’Detection’).

  • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the configuration file).

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • decode (bool, optional) – Decode the images after reading (default=False).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Raises
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • RuntimeError – If parsing the JSON annotation file failed.

  • ValueError – If task is not in [‘Detection’, ‘Stuff’, ‘Panoptic’, ‘Keypoint’].

  • ValueError – If annotation_file does not exist.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> coco_dataset_dir = "/path/to/coco_dataset_directory/images"
>>> coco_annotation_file = "/path/to/coco_dataset_directory/annotation_file"
>>>
>>> # 1) Read COCO data for Detection task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Detection')
>>>
>>> # 2) Read COCO data for Stuff task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Stuff')
>>>
>>> # 3) Read COCO data for Panoptic task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Panoptic')
>>>
>>> # 4) Read COCO data for Keypoint task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Keypoint')
>>>
>>> # In COCO dataset, each dictionary has keys "image" and "annotation"
get_class_indexing()[source]

Get the class index.

Returns

dict, a str-to-list<int> mapping from label name to index.
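
A usage sketch, reusing the dataset constructed in the examples above:

>>> class_indexing = dataset.get_class_indexing()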

class tinyms.data.CSVDataset(dataset_files, field_delim=',', column_defaults=None, column_names=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

A source dataset that reads and parses comma-separated values (CSV) datasets.

Parameters
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in a lexicographical order.

  • field_delim (str, optional) – A string that indicates the char delimiter to separate fields (default=’,’).

  • column_defaults (list, optional) – List of default values for the CSV fields (default=None). Each item in the list must be of a valid type (float, int, or string). If this is not provided, all columns are treated as string type.

  • column_names (list[str], optional) – List of column names of the dataset (default=None). If this is not provided, column_names are inferred from the first row of the CSV file.

  • num_samples (int, optional) – Number of samples (rows) to read (default=None, reads the full dataset).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (Union[bool, Shuffle level], optional) –

    Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed. If shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Otherwise, there are two levels of shuffling:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Examples

>>> csv_dataset_dir = ["/path/to/csv_dataset_file"] # contains 1 or multiple csv files
>>> dataset = ds.CSVDataset(dataset_files=csv_dataset_dir, column_names=['col1', 'col2', 'col3', 'col4'])
class tinyms.data.GeneratorDataset(source, column_names=None, column_types=None, schema=None, num_samples=None, num_parallel_workers=1, shuffle=None, sampler=None, num_shards=None, shard_id=None, python_multiprocessing=True)[source]

A source dataset that generates data from Python by invoking Python data source each epoch.

This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Parameters
  • source (Union[Callable, Iterable, Random Accessible]) – A generator callable object, an iterable Python object or a random accessible Python object. Callable source is required to return a tuple of NumPy arrays as a row of the dataset on source().next(). Iterable source is required to return a tuple of NumPy arrays as a row of the dataset on iter(source).next(). Random accessible source is required to return a tuple of NumPy arrays as a row of the dataset on source[idx].

  • column_names (Union[str, list[str]], optional) – List of column names of the dataset (default=None). Users are required to provide either column_names or schema.

  • column_types (list[mindspore.dtype], optional) – List of column data types of the dataset (default=None). If provided, sanity check will be performed on generator output.

  • schema (Union[Schema, str], optional) – Path to the JSON schema file or schema object (default=None). Users are required to provide either column_names or schema. If both are provided, schema will be used.

  • num_samples (int, optional) – The number of samples to be included in the dataset (default=None, all images).

  • num_parallel_workers (int, optional) – Number of subprocesses used to fetch the dataset in parallel (default=1).

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Random accessible input is required. (default=None, expected order behavior shown in the table).

  • sampler (Union[Sampler, Iterable], optional) – Object used to choose samples from the dataset. Random accessible input is required (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). Random accessible input is required. When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument must be specified only when num_shards is also specified. Random accessible input is required.

  • python_multiprocessing (bool, optional) – Parallelize Python operations with multiple worker processes. This option could be beneficial if the Python operation is computationally heavy (default=True).

Examples

>>> import numpy as np
>>>
>>> # 1) Multidimensional generator function as callable input.
>>> def generator_multidimensional():
...     for i in range(64):
...         yield (np.array([[i, i + 1], [i + 2, i + 3]]),)
>>>
>>> dataset = ds.GeneratorDataset(source=generator_multidimensional, column_names=["multi_dimensional_data"])
>>>
>>> # 2) Multi-column generator function as callable input.
>>> def generator_multi_column():
...     for i in range(64):
...         yield np.array([i]), np.array([[i, i + 1], [i + 2, i + 3]])
>>>
>>> dataset = ds.GeneratorDataset(source=generator_multi_column, column_names=["col1", "col2"])
>>>
>>> # 3) Iterable dataset as iterable input.
>>> class MyIterable:
...     def __init__(self):
...         self._index = 0
...         self._data = np.random.sample((5, 2))
...         self._label = np.random.sample((5, 1))
...
...     def __next__(self):
...         if self._index >= len(self._data):
...             raise StopIteration
...         else:
...             item = (self._data[self._index], self._label[self._index])
...             self._index += 1
...             return item
...
...     def __iter__(self):
...         self._index = 0
...         return self
...
...     def __len__(self):
...         return len(self._data)
>>>
>>> dataset = ds.GeneratorDataset(source=MyIterable(), column_names=["data", "label"])
>>>
>>> # 4) Random accessible dataset as random accessible input.
>>> class MyAccessible:
...     def __init__(self):
...         self._data = np.random.sample((5, 2))
...         self._label = np.random.sample((5, 1))
...
...     def __getitem__(self, index):
...         return self._data[index], self._label[index]
...
...     def __len__(self):
...         return len(self._data)
>>>
>>> dataset = ds.GeneratorDataset(source=MyAccessible(), column_names=["data", "label"])
>>>
>>> # list, dict, tuple of Python is also random accessible
>>> dataset = ds.GeneratorDataset(source=[(np.array(0),), (np.array(1),), (np.array(2),)], column_names=["col"])
class tinyms.data.GraphData(dataset_file, num_parallel_workers=None, working_mode='local', hostname='127.0.0.1', port=50051, num_client=1, auto_shutdown=True)[source]

Reads the graph dataset used for GNN training from the shared file and database.

Parameters
  • dataset_file (str) – One of file names in the dataset.

  • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel (default=None).

  • working_mode (str, optional) –

    Set working mode, now supports ‘local’/’client’/’server’ (default=’local’).

    • ’local’, used in non-distributed training scenarios.

    • ’client’, used in distributed training scenarios. The client does not load data, but obtains data from the server.

    • ’server’, used in distributed training scenarios. The server loads the data and is available to the client.

  • hostname (str, optional) – Hostname of the graph data server. This parameter is only valid when working_mode is set to ‘client’ or ‘server’ (default=’127.0.0.1’).

  • port (int, optional) – Port of the graph data server. The range is 1024-65535. This parameter is only valid when working_mode is set to ‘client’ or ‘server’ (default=50051).

  • num_client (int, optional) – Maximum number of clients expected to connect to the server. The server will allocate resources according to this parameter. This parameter is only valid when working_mode is set to ‘server’ (default=1).

  • auto_shutdown (bool, optional) – Valid when working_mode is set to ‘server’, when the number of connected clients reaches num_client and no client is being connected, the server automatically exits (default=True).

Examples

>>> graph_dataset_dir = "/path/to/graph_dataset_file"
>>> graph_dataset = ds.GraphData(dataset_file=graph_dataset_dir, num_parallel_workers=2)
>>> nodes = graph_dataset.get_all_nodes(node_type=1)
>>> features = graph_dataset.get_node_feature(node_list=nodes, feature_types=[1])
get_all_edges(edge_type)[source]

Get all edges in the graph.

Parameters

edge_type (int) – Specify the type of edge.

Returns

numpy.ndarray, array of edges.

Examples

>>> edges = graph_dataset.get_all_edges(edge_type=0)
Raises

TypeError – If edge_type is not an integer.

get_all_neighbors(node_list, neighbor_type)[source]

Get neighbors of type neighbor_type for the nodes in node_list.

Parameters
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neighbor_type (int) – Specify the type of neighbor.

Returns

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph_dataset.get_all_nodes(node_type=1)
>>> neighbors = graph_dataset.get_all_neighbors(node_list=nodes, neighbor_type=2)
Raises
  • TypeError – If node_list is not a list or ndarray.

  • TypeError – If neighbor_type is not an integer.

get_all_nodes(node_type)[source]

Get all nodes in the graph.

Parameters

node_type (int) – Specify the type of node.

Returns

numpy.ndarray, array of nodes.

Examples

>>> nodes = graph_dataset.get_all_nodes(node_type=1)
Raises

TypeError – If node_type is not an integer.

get_edge_feature(edge_list, feature_types)[source]

Get features of the given feature_types for the edges in edge_list.

Parameters
  • edge_list (Union[list, numpy.ndarray]) – The given list of edges.

  • feature_types (Union[list, numpy.ndarray]) – The given list of feature types.

Returns

numpy.ndarray, array of features.

Examples

>>> edges = graph_dataset.get_all_edges(edge_type=0)
>>> features = graph_dataset.get_edge_feature(edge_list=edges, feature_types=[1])
Raises
  • TypeError – If edge_list is not a list or ndarray.

  • TypeError – If feature_types is not a list or ndarray.

get_neg_sampled_neighbors(node_list, neg_neighbor_num, neg_neighbor_type)[source]

Get negative sampled neighbors of type neg_neighbor_type for the nodes in node_list.

Parameters
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neg_neighbor_num (int) – Number of neighbors sampled.

  • neg_neighbor_type (int) – Specify the type of negative neighbor.

Returns

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph_dataset.get_all_nodes(node_type=1)
>>> neg_neighbors = graph_dataset.get_neg_sampled_neighbors(node_list=nodes, neg_neighbor_num=5,
...                                                         neg_neighbor_type=2)
Raises
  • TypeError – If node_list is not a list or ndarray.

  • TypeError – If neg_neighbor_num is not an integer.

  • TypeError – If neg_neighbor_type is not an integer.

get_node_feature(node_list, feature_types)[source]

Get features of the given feature_types for the nodes in node_list.

Parameters
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • feature_types (Union[list, numpy.ndarray]) – The given list of feature types.

Returns

numpy.ndarray, array of features.

Examples

>>> nodes = graph_dataset.get_all_nodes(node_type=1)
>>> features = graph_dataset.get_node_feature(node_list=nodes, feature_types=[2, 3])
Raises
  • TypeError – If node_list is not a list or ndarray.

  • TypeError – If feature_types is not a list or ndarray.

get_nodes_from_edges(edge_list)[source]

Get nodes from the edges.

Parameters

edge_list (Union[list, numpy.ndarray]) – The given list of edges.

Returns

numpy.ndarray, array of nodes.

Raises

TypeError – If edge_list is not a list or ndarray.
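
A usage sketch, reusing the graph_dataset from the examples above:

>>> edges = graph_dataset.get_all_edges(edge_type=0)
>>> nodes = graph_dataset.get_nodes_from_edges(edge_list=edges)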

get_sampled_neighbors(node_list, neighbor_nums, neighbor_types, strategy=<SamplingStrategy.RANDOM: 0>)[source]

Get sampled neighbor information.

The API supports multi-hop neighbor sampling; that is, the previous sampling result is used as the input of the next-hop sampling. A maximum of 6 hops is allowed.

The sampling result is tiled into a list in the format of [input node, 1-hop sampling result, 2-hop sampling result ...].

Parameters
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neighbor_nums (Union[list, numpy.ndarray]) – Number of neighbors sampled per hop.

  • neighbor_types (Union[list, numpy.ndarray]) – Neighbor type sampled per hop.

  • strategy (SamplingStrategy, optional) –

    Sampling strategy (default=SamplingStrategy.RANDOM). It can be any of [SamplingStrategy.RANDOM, SamplingStrategy.EDGE_WEIGHT].

    • SamplingStrategy.RANDOM, random sampling with replacement.

    • SamplingStrategy.EDGE_WEIGHT, sampling with edge weight as probability.

Returns

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph_dataset.get_all_nodes(node_type=1)
>>> neighbors = graph_dataset.get_sampled_neighbors(node_list=nodes, neighbor_nums=[2, 2],
...                                                 neighbor_types=[2, 1])
Raises
  • TypeError – If node_list is not a list or ndarray.

  • TypeError – If neighbor_nums is not a list or ndarray.

  • TypeError – If neighbor_types is not a list or ndarray.

graph_info()[source]

Get the meta information of the graph, including the number of nodes, the type of nodes, the feature information of nodes, the number of edges, the type of edges, and the feature information of edges.

Returns

dict, meta information of the graph. The keys are node_type, edge_type, node_num, edge_num, node_feature_type and edge_feature_type.
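
A usage sketch, reusing the graph_dataset from the examples above (node_type is one of the documented keys):

>>> info = graph_dataset.graph_info()
>>> node_types = info['node_type']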

random_walk(target_nodes, meta_path, step_home_param=1.0, step_away_param=1.0, default_node=-1)[source]

Random walk in nodes.

Parameters
  • target_nodes (list[int]) – Start node list in the random walk.

  • meta_path (list[int]) – Node type for each walk step.

  • step_home_param (float, optional) – Return hyperparameter in the node2vec algorithm (default=1.0).

  • step_away_param (float, optional) – In-out hyperparameter in the node2vec algorithm (default=1.0).

  • default_node (int, optional) – Default node if no more neighbors are found (default=-1). A default value of -1 indicates that no node is given.

Returns

numpy.ndarray, array of nodes.

Examples

>>> nodes = graph_dataset.get_all_nodes(node_type=1)
>>> walks = graph_dataset.random_walk(target_nodes=nodes, meta_path=[2, 1, 2])
Raises
  • TypeError – If target_nodes is not a list or ndarray.

  • TypeError – If meta_path is not a list or ndarray.

class tinyms.data.ImageFolderDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, extensions=None, class_indexing=None, decode=False, num_shards=None, shard_id=None, cache=None)[source]

A source dataset that reads images from a tree of directories.

All images within one folder have the same label. The generated dataset has two columns [‘image’, ‘label’]. The shape of the image column is [image_size] if decode flag is False, or [H,W,C] otherwise. The type of the image tensor is uint8. The label is a scalar int32 tensor. This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, set in the config).

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • extensions (list[str], optional) – List of file extensions to be included in the dataset (default=None).

  • class_indexing (dict, optional) – A str-to-int mapping from folder name to index (default=None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0).

  • decode (bool, optional) – Decode the images after reading (default=False).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Raises
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • RuntimeError – If class_indexing is not a dictionary.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> image_folder_dataset_dir = "/path/to/image_folder_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in image_folder_dataset_dir with 8 threads
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir,
...                                 num_parallel_workers=8)
>>>
>>> # 2) Read all samples (image files) from folder cat and folder dog with label 0 and 1
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir,
...                                 class_indexing={"cat":0, "dog":1})
>>>
>>> # 3) Read all samples (image files) in image_folder_dataset_dir with extensions .JPEG and .png (case sensitive)
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir,
...                                 extensions=[".JPEG", ".png"])
class tinyms.data.ManifestDataset(dataset_file, usage='train', num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, class_indexing=None, decode=False, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading images from a Manifest file.

The generated dataset has two columns [‘image’, ‘label’]. The shape of the image column is [image_size] if decode flag is False, or [H,W,C] otherwise. The type of the image tensor is uint8. The label is a scalar uint64 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Parameters
  • dataset_file (str) – File to be read.

  • usage (str, optional) – Acceptable usages include “train”, “eval” and “inference” (default=”train”).

  • num_samples (int, optional) – The number of images to be included in the dataset. (default=None, will include all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, will use value set in the config).

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • class_indexing (dict, optional) – A str-to-int mapping from label name to index (default=None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0).

  • decode (bool, optional) – Decode the images after reading (default=False).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, num_samples reflects the max number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Raises
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • RuntimeError – If class_indexing is not a dictionary.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> manifest_dataset_dir = "/path/to/manifest_dataset_file"
>>>
>>> # 1) Read all samples specified in manifest_dataset_dir dataset with 8 threads for training
>>> dataset = ds.ManifestDataset(dataset_file=manifest_dataset_dir, usage="train", num_parallel_workers=8)
>>>
>>> # 2) Read samples (specified in manifest_file.manifest) for shard 0 in a 2-way distributed training setup
>>> dataset = ds.ManifestDataset(dataset_file=manifest_dataset_dir, num_shards=2, shard_id=0)
get_class_indexing()[source]

Get the class index.

Returns

dict, a str-to-int mapping from label name to index.
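
A usage sketch, reusing the dataset constructed in the examples above:

>>> class_indexing = dataset.get_class_indexing()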

class tinyms.data.MindDataset(dataset_file, columns_list=None, num_parallel_workers=None, shuffle=None, num_shards=None, shard_id=None, sampler=None, padded_sample=None, num_padded=None, num_samples=None)[source]

A source dataset for reading and parsing MindRecord dataset.

Parameters
  • dataset_file (Union[str, list[str]]) – If dataset_file is a str, it represents a file name of one component of a MindRecord source, and other files with the identical source in the same path will be found and loaded automatically. If dataset_file is a list, it represents a list of dataset files to be read directly.

  • columns_list (list[str], optional) – List of columns to be read (default=None).

  • num_parallel_workers (int, optional) – The number of readers (default=None).

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset (default=None, performs shuffle).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None; sampler is exclusive with shuffle and block_reader). Supported samplers: SubsetRandomSampler, PkSampler, RandomSampler, SequentialSampler, DistributedSampler.

  • padded_sample (dict, optional) – Samples will be appended to the dataset, where keys are the same as columns_list.

  • num_padded (int, optional) – Number of padding samples. Dataset size plus num_padded should be divisible by num_shards.

  • num_samples (int, optional) – The number of samples to be included in the dataset (default=None, all samples).

Raises
  • ValueError – If num_shards is specified but shard_id is None.

  • ValueError – If shard_id is specified but num_shards is None.

Examples

>>> mind_dataset_dir = ["/path/to/mind_dataset_file"] # contains 1 or multiple MindRecord files
>>> dataset = ds.MindDataset(dataset_file=mind_dataset_dir)
class tinyms.data.MnistDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading and parsing the MNIST dataset.

The generated dataset has two columns [‘image’, ‘label’]. The type of the image tensor is uint8. The label is a scalar uint32 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Citation of Mnist dataset.

@article{lecun2010mnist,
title        = {MNIST handwritten digit database},
author       = {LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal      = {ATT Labs [Online]},
volume       = {2},
year         = {2010},
howpublished = {http://yann.lecun.com/exdb/mnist},
description  = {The MNIST database of handwritten digits has a training set of 60,000 examples,
                and a test set of 10,000 examples. It is a subset of a larger set available from
                NIST. The digits have been size-normalized and centered in a fixed-size image.}
}
Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be "train", "test" or "all". "train" will read from 60,000 train samples, "test" will read from 10,000 test samples, "all" will read from all 70,000 samples (default=None, will read all samples).

  • num_samples (int, optional) – The number of images to be included in the dataset (default=None, will read all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, will use value set in the config).

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Raises
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> mnist_dataset_dir = "/path/to/mnist_dataset_directory"
>>>
>>> # Read 3 samples from MNIST dataset
>>> dataset = ds.MnistDataset(dataset_dir=mnist_dataset_dir, num_samples=3)
>>>
>>> # Note: In mnist_dataset dataset, each dictionary has keys "image" and "label"
class tinyms.data.NumpySlicesDataset(data, column_names=None, num_samples=None, num_parallel_workers=1, shuffle=None, sampler=None, num_shards=None, shard_id=None)[source]

Creates a dataset with given data slices, mainly for loading Python data into a dataset.

This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

  Parameter 'sampler'    Parameter 'shuffle'    Expected Order Behavior
  None                   None                   random order
  None                   True                   random order
  None                   False                  sequential order
  Sampler object         None                   order defined by sampler
  Sampler object         True                   not allowed
  Sampler object         False                  not allowed

Parameters
  • data (Union[list, tuple, dict]) – Input data in list, tuple, dict, or other NumPy formats. The input data will be sliced along the first dimension to generate rows. If the input is a list, each row will have one column; otherwise, rows tend to have multiple columns. Loading large data this way is not recommended, since the data is loaded into memory.

  • column_names (list[str], optional) – List of column names of the dataset (default=None). If column_names is not provided, when data is dict, column_names will be its keys, otherwise it will be like column_0, column_1 …

  • num_samples (int, optional) – The number of samples to be included in the dataset (default=None, all images).

  • num_parallel_workers (int, optional) – Number of subprocesses used to fetch the dataset in parallel (default=1).

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Random accessible input is required. (default=None, expected order behavior shown in the table).

  • sampler (Union[Sampler, Iterable], optional) – Object used to choose samples from the dataset. Random accessible input is required (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). Random accessible input is required. When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument must be specified only when num_shards is also specified. Random accessible input is required.

Examples

>>> # 1) Input data can be a list
>>> data = [1, 2, 3]
>>> dataset = ds.NumpySlicesDataset(data=data, column_names=["column_1"])
>>>
>>> # 2) Input data can be a dictionary, and column_names will be its keys
>>> data = {"a": [1, 2], "b": [3, 4]}
>>> dataset = ds.NumpySlicesDataset(data=data)
>>>
>>> # 3) Input data can be a tuple of lists (or NumPy arrays), each tuple element refers to data in each column
>>> data = ([1, 2], [3, 4], [5, 6])
>>> dataset = ds.NumpySlicesDataset(data=data, column_names=["column_1", "column_2", "column_3"])
>>>
>>> # 4) Load data from CSV file
>>> import pandas as pd
>>> df = pd.read_csv(filepath_or_buffer=csv_dataset_dir[0])
>>> dataset = ds.NumpySlicesDataset(data=dict(df), shuffle=False)
class tinyms.data.PaddedDataset(padded_samples)[source]

Creates a dataset with filler data provided by the user. Mainly used to append filler samples to the original dataset so that the samples can be assigned evenly to the corresponding shards.

Parameters

padded_samples (list(dict)) – Samples provided by user.

Raises
  • TypeError – If padded_samples is not an instance of list.

  • TypeError – If an element of padded_samples is not an instance of dict.

  • ValueError – If padded_samples is empty.

Examples

>>> import numpy as np
>>> data = [{'image': np.zeros(1, np.uint8)}, {'image': np.zeros(2, np.uint8)}]
>>> dataset = ds.PaddedDataset(padded_samples=data)
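
A hedged follow-up sketch: building two filler datasets with matching column layouts and concatenating them (the + concatenation operator between datasets is an assumption here, mirroring the MindSpore dataset concat behavior):

>>> part_a = ds.PaddedDataset(padded_samples=[{'image': np.zeros(1, np.uint8)}])
>>> part_b = ds.PaddedDataset(padded_samples=[{'image': np.zeros(2, np.uint8)}])
>>> combined = part_a + part_b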
class tinyms.data.TextFileDataset(dataset_files, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

A source dataset that reads and parses datasets stored on disk in text format. The generated dataset has one column [‘text’].

Parameters
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in a lexicographical order.

  • num_samples (int, optional) – Number of samples (rows) to read (default=None, reads the full dataset).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (Union[bool, Shuffle level], optional) –

    Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed. If shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Otherwise, there are two levels of shuffling:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, 'num_samples' reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

Examples

>>> text_file_dataset_dir = ["/path/to/text_file_dataset_file"] # contains 1 or multiple text files
>>> dataset = ds.TextFileDataset(dataset_files=text_file_dataset_dir)
class tinyms.data.TFRecordDataset(dataset_files, schema=None, columns_list=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, shard_equal_rows=False, cache=None)[source]

A source dataset for reading and parsing datasets stored on disk in TFData format.

Parameters
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in a lexicographical order.

  • schema (Union[str, Schema], optional) – Path to the JSON schema file or schema object (default=None). If the schema is not provided, the meta data from the TFData file is considered the schema.

  • columns_list (list[str], optional) – List of columns to be read (default=None, read all columns).

  • num_samples (int, optional) – Number of samples (rows) to read (default=None). If num_samples is None and numRows (parsed from schema) does not exist, read the full dataset; if num_samples is None and numRows (parsed from schema) is greater than 0, read numRows rows; if both num_samples and numRows (parsed from schema) are greater than 0, read num_samples rows.

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (Union[bool, Shuffle level], optional) –

    Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed; If shuffle is True, the behavior is the same as setting shuffle to be Shuffle.GLOBAL Otherwise, there are two levels of shuffling:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, ‘num_samples’ reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • shard_equal_rows (bool, optional) – Get equal rows for all shards(default=False). If shard_equal_rows is false, number of rows of each shard may be not equal. This argument should only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None, which means no cache is used).

实际案例

>>> import mindspore.common.dtype as mstype
>>>
>>> tfrecord_dataset_dir = ["/path/to/tfrecord_dataset_file"] # contains 1 or multiple TFRecord files
>>> tfrecord_schema_file = "/path/to/tfrecord_schema_file"
>>>
>>> # 1) Get all rows from tfrecord_dataset_dir with no explicit schema.
>>> # The meta-data in the first row will be used as a schema.
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir)
>>>
>>> # 2) Get all rows from tfrecord_dataset_dir with user-defined schema.
>>> schema = ds.Schema()
>>> schema.add_column(name='col_1d', de_type=mstype.int64, shape=[2])
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir, schema=schema)
>>>
>>> # 3) Get all rows from tfrecord_dataset_dir with schema file.
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir, schema=tfrecord_schema_file)
class tinyms.data.VOCDataset(dataset_dir, task='Segmentation', usage='train', class_indexing=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading and parsing the VOC dataset.

The generated dataset has multiple columns:

  • task='Detection', columns: [['image', dtype=uint8], ['bbox', dtype=float32], ['label', dtype=uint32], ['difficult', dtype=uint32], ['truncate', dtype=uint32]].

  • task='Segmentation', columns: [['image', dtype=uint8], ['target', dtype=uint8]].

This dataset can take in a sampler. 'sampler' and 'shuffle' are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using 'sampler' and 'shuffle'

Parameter 'sampler'   Parameter 'shuffle'   Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Citation of VOC dataset.

@article{Everingham10,
author       = {Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.},
title        = {The Pascal Visual Object Classes (VOC) Challenge},
journal      = {International Journal of Computer Vision},
volume       = {88},
year         = {2010},
number       = {2},
month        = {jun},
pages        = {303--338},
biburl       = {http://host.robots.ox.ac.uk/pascal/VOC/pubs/everingham10.html#bibtex},
howpublished = {http://host.robots.ox.ac.uk/pascal/VOC/voc{year}/index.html},
description  = {The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual
                object category recognition and detection, providing the vision and machine
                learning communities with a standard dataset of images and annotation, and
                standard evaluation procedures.}
}
Parameters
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • task (str) – Set the task type for reading VOC data; only "Segmentation" and "Detection" are currently supported (default="Segmentation").

  • usage (str) – The type of data list text file to be read (default="train").

  • class_indexing (dict, optional) – A str-to-int mapping from label name to index, only valid in the "Detection" task (default=None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0).

  • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

  • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

  • decode (bool, optional) – Decode the images after reading (default=False).

  • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing (default=None, which means no cache is used).

Raises
  • RuntimeError – If the XML annotation file has an invalid format.

  • RuntimeError – If the XML annotation file lacks the attribute "object".

  • RuntimeError – If the XML annotation file lacks the attribute "bndbox".

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If task is neither 'Segmentation' nor 'Detection'.

  • ValueError – If task is 'Segmentation' but class_indexing is not None.

  • ValueError – If the txt file related to usage does not exist.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> voc_dataset_dir = "/path/to/voc_dataset_directory"
>>>
>>> # 1) Read VOC data for segmentation training
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Segmentation", usage="train")
>>>
>>> # 2) Read VOC data for detection training
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection", usage="train")
>>>
>>> # 3) Read all VOC dataset samples in voc_dataset_dir with 8 threads in random order
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection", usage="train",
...                         num_parallel_workers=8)
>>>
>>> # 4) Read then decode all VOC dataset samples in voc_dataset_dir in sequence
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection", usage="train",
...                         decode=True, shuffle=False)
>>>
>>> # In VOC dataset, if task='Segmentation', each dictionary has keys "image" and "target"
>>> # In VOC dataset, if task='Detection', each dictionary has keys "image" and "annotation"
get_class_indexing()[source]

Get the class index.

Returns

dict, a str-to-int mapping from label name to index.
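
A minimal usage sketch, assuming a hypothetical VOC root directory; the mapping is only meaningful for the "Detection" task:

>>> import mindspore.dataset as ds
>>>
>>> dataset = ds.VOCDataset("/path/to/voc_dataset_directory", task="Detection", usage="train")
>>> class_indexing = dataset.get_class_indexing()   # e.g. {'car': 0, 'cat': 1, ...}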

class tinyms.data.PKSampler(num_val, num_class=None, shuffle=False, class_column='label', num_samples=None)[source]

Samples K elements for each of the P classes in the dataset.

Parameters
  • num_val (int) – Number of elements to sample for each class.

  • num_class (int, optional) – Number of classes to sample (default=None, all classes). Specifying this parameter is not currently supported.

  • shuffle (bool, optional) – If True, the class IDs are shuffled (default=False).

  • class_column (str, optional) – Name of the column with class labels for MindDataset (default='label').

  • num_samples (int, optional) – The number of samples to draw (default=None, all elements).

Examples

>>> # creates a PKSampler that will get 3 samples from every class.
>>> sampler = ds.PKSampler(3)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
class tinyms.data.RandomSampler(replacement=False, num_samples=None)[source]

Samples the elements randomly.

Parameters
  • replacement (bool, optional) – If True, put the sample ID back for the next draw (default=False).

  • num_samples (int, optional) – Number of elements to sample (default=None, all elements).

Examples

>>> # creates a RandomSampler
>>> sampler = ds.RandomSampler()
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
Raises
  • TypeError – If replacement is not a boolean value.

  • TypeError – If num_samples is not an integer value.

  • RuntimeError – If num_samples is a negative value.

class tinyms.data.SequentialSampler(start_index=None, num_samples=None)[source]

Samples the dataset elements sequentially, the same as not having a sampler.

Parameters
  • start_index (int, optional) – Index to start sampling at (default=None, start at first ID).

  • num_samples (int, optional) – Number of elements to sample (default=None, all elements).

Examples

>>> # creates a SequentialSampler
>>> sampler = ds.SequentialSampler()
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
Raises
  • TypeError – If start_index is not an integer value.

  • TypeError – If num_samples is not an integer value.

  • RuntimeError – If start_index is a negative value.

  • RuntimeError – If num_samples is a negative value.

class tinyms.data.SubsetRandomSampler(indices, num_samples=None)[source]

Samples the elements randomly from a sequence of indices.

Parameters
  • indices (Any iterable Python object except string) – A sequence of indices.

  • num_samples (int, optional) – Number of elements to sample (default=None, all elements).

Examples

>>> indices = [0, 1, 2, 3, 7, 88, 119]
>>>
>>> # create a SubsetRandomSampler, will sample from the provided indices
>>> sampler = ds.SubsetRandomSampler(indices)
>>> data = ds.ImageFolderDataset(image_folder_dataset_dir, num_parallel_workers=8, sampler=sampler)
Raises
  • TypeError – If the type of an indices element is not a number.

  • TypeError – If num_samples is not an integer value.

  • RuntimeError – If num_samples is a negative value.

class tinyms.data.WeightedRandomSampler(weights, num_samples=None, replacement=True)[source]

Samples the elements from [0, len(weights) - 1] randomly with the given weights (probabilities).

Parameters
  • weights (list[float, int]) – A sequence of weights, not necessarily summing up to 1.

  • num_samples (int, optional) – Number of elements to sample (default=None, all elements).

  • replacement (bool) – If True, put the sample ID back for the next draw (default=True).

Examples

>>> weights = [0.9, 0.01, 0.4, 0.8, 0.1, 0.1, 0.3]
>>>
>>> # creates a WeightedRandomSampler that will sample 4 elements without replacement
>>> sampler = ds.WeightedRandomSampler(weights, 4, replacement=False)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
Raises
  • TypeError – If the type of a weights element is not a number.

  • TypeError – If num_samples is not an integer value.

  • TypeError – If replacement is not a boolean value.

  • RuntimeError – If weights is empty or all weights are zero.

  • RuntimeError – If num_samples is a negative value.

class tinyms.data.SubsetSampler(indices, num_samples=None)[source]

Samples the elements from a sequence of indices.

Parameters
  • indices (Any iterable Python object except string) – A sequence of indices.

  • num_samples (int, optional) – Number of elements to sample (default=None, all elements).

Examples

>>> indices = [0, 1, 2, 3, 4, 5]
>>>
>>> # creates a SubsetSampler, will sample from the provided indices
>>> sampler = ds.SubsetSampler(indices)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
Raises
  • TypeError – If the type of an indices element is not a number.

  • TypeError – If num_samples is not an integer value.

  • RuntimeError – If num_samples is a negative value.

class tinyms.data.DatasetCache(session_id, size=0, spilling=False, hostname=None, port=None, num_connections=None, prefetch_size=None)[source]

A client to interface with the tensor caching service.

For details, please refer to the cache tutorial and programming guide.

Parameters
  • session_id (int) – A user-assigned session id for the current pipeline.

  • size (int, optional) – Size of the memory set aside for row caching (default=0, which means unlimited; note that this might bring the risk of running out of memory on the machine).

  • spilling (bool, optional) – Whether or not to spill to disk if out of memory (default=False).

  • hostname (str, optional) – Host name (default="127.0.0.1").

  • port (int, optional) – Port to connect to the server (default=50052).

  • num_connections (int, optional) – Number of TCP/IP connections (default=12).

  • prefetch_size (int, optional) – Prefetch size (default=20).

Examples

>>> import mindspore.dataset as ds
>>>
>>> # create a cache instance, in which session_id is generated from command line `cache_admin -g`
>>> some_cache = ds.DatasetCache(session_id=session_id, size=0)
>>>
>>> dataset_dir = "path/to/imagefolder_directory"
>>> ds1 = ds.ImageFolderDataset(dataset_dir, cache=some_cache)
class tinyms.data.Schema(schema_file=None)[source]

Class to represent the schema of a dataset.

Parameters

schema_file (str) – Path of the schema file (default=None).

Returns

Schema object, schema info about the dataset.

Raises

RuntimeError – If the schema file failed to load.

Examples

>>> import mindspore.common.dtype as mstype
>>>
>>> # Create schema; specify column name, mindspore.dtype and shape of the column
>>> schema = ds.Schema()
>>> schema.add_column(name='col1', de_type=mstype.int64, shape=[2])
add_column(name, de_type, shape=None)[source]

Add a new column to the schema.

Parameters
  • name (str) – Name of the column.

  • de_type (str) – Data type of the column.

  • shape (list[int], optional) – Shape of the column (default=None, [-1] which is an unknown shape of rank 1).

Raises

ValueError – If the column type is unknown.

from_json(json_obj)[source]

Get the schema from a JSON object.

Parameters

json_obj (dictionary) – Parsed JSON object.
parse_columns(columns)[source]

Parse the columns and add them to self.

Parameters

columns (Union[dict, list[dict], tuple[dict]]) –

Dataset attribute information, decoded from the schema file.

  • list[dict]: 'name' and 'type' must be in the keys, 'shape' is optional.

  • dict: columns.keys() as names, columns.values() as dicts that contain 'type' and, optionally, 'shape'.

Examples

>>> schema = Schema()
>>> columns1 = [{'name': 'image', 'type': 'int8', 'shape': [3, 3]},
...             {'name': 'label', 'type': 'int8', 'shape': [1]}]
>>> schema.parse_columns(columns1)
>>> columns2 = {'image': {'shape': [3, 3], 'type': 'int8'}, 'label': {'shape': [1], 'type': 'int8'}}
>>> schema.parse_columns(columns2)
to_json()[source]

Get a JSON string of the schema.

Returns

str, JSON string of the schema.
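
A short sketch showing the round trip: serialize a schema to JSON and write it out so it can later be passed as a schema file (the file name "schema.json" is hypothetical):

>>> import mindspore.common.dtype as mstype
>>> import mindspore.dataset as ds
>>>
>>> schema = ds.Schema()
>>> schema.add_column(name='image', de_type=mstype.uint8, shape=[-1])
>>> json_str = schema.to_json()   # JSON string with column names, types and shapes
>>> with open("schema.json", "w") as f:
...     f.write(json_str)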

tinyms.data.zip(datasets)[source]

Zip the datasets in the input tuple of datasets.

Parameters

datasets (tuple of class Dataset) – A tuple of datasets to be zipped together. The number of datasets must be more than 1.

Returns

ZipDataset, the zipped dataset.

Examples

>>> # Create a dataset which is the combination of dataset_1 and dataset_2
>>> dataset = ds.zip((dataset_1, dataset_2))
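
A self-contained sketch: column names of the zipped datasets must not overlap, so each side below contributes one distinctly named column (NumpySlicesDataset is used here only to build toy inputs):

>>> import numpy as np
>>> import mindspore.dataset as ds
>>>
>>> dataset_1 = ds.NumpySlicesDataset({"data": np.arange(5)}, shuffle=False)
>>> dataset_2 = ds.NumpySlicesDataset({"label": np.arange(5)}, shuffle=False)
>>> dataset = ds.zip((dataset_1, dataset_2))   # columns: 'data' and 'label'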
class tinyms.data.FileWriter(file_name, shard_num=1)[source]

Class to write user-defined raw data into a MindRecord file series.

Note

The MindRecord file may fail to be read if the file name is modified.

Parameters
  • file_name (str) – File name of the MindRecord file.

  • shard_num (int, optional) – The number of MindRecord files (default=1). It should be within [1, 1000].

Raises

ParamValueError – If file_name or shard_num is invalid.

add_index(index_fields)[source]

Select index fields from the schema to accelerate reading.

Parameters

index_fields (list[str]) – Fields to be set as indexes; they should be of primitive type.

Returns

MSRStatus, SUCCESS or FAILED.

Raises
  • ParamTypeError – If the index field is invalid.

  • MRMDefineIndexError – If the index field is not of primitive type.

  • MRMAddIndexError – If failed to add an index field.

  • MRMGetMetaError – If the schema is not set or failed to get meta.

add_schema(content, desc=None)[source]

Return a schema id if the schema is added successfully, or raise an exception.

Parameters
  • content (dict) – Dictionary of the user-defined schema.

  • desc (str, optional) – String describing the schema (default=None).

Returns

int, schema id.

Raises
  • MRMInvalidSchemaError – If the schema is invalid.

  • MRMBuildSchemaError – If failed to build the schema.

  • MRMAddSchemaError – If failed to add the schema.

commit()[source]

Flush data to disk and generate the corresponding database files.

Returns

MSRStatus, SUCCESS or FAILED.

Raises
  • MRMOpenError – If failed to open the MindRecord file.

  • MRMSetHeaderError – If failed to set the header.

  • MRMIndexGeneratorError – If failed to create the index generator.

  • MRMGenerateIndexError – If failed to write to the database.

  • MRMCommitError – If failed to flush data to disk.

open_and_set_header()[source]

Open the writer and set the header.

classmethod open_for_append(file_name)[source]

Open a MindRecord file and get ready to append data.

Parameters

file_name (str) – String of the MindRecord file name.

Returns

FileWriter, file writer for the opened MindRecord file.

Raises
  • ParamValueError – If file_name is invalid.

  • FileNameError – If the path contains invalid characters.

  • MRMOpenError – If failed to open the MindRecord file.

  • MRMOpenForAppendError – If failed to open the file for appending data.

set_header_size(header_size)[source]

Set the size of the header, which contains shard information, schema information, page meta information, etc. The larger the header, the more training data a single MindRecord file can store.

Parameters

header_size (int) – Size of the header, between 16 KB and 128 MB.

Returns

MSRStatus, SUCCESS or FAILED.

Raises

MRMInvalidHeaderSizeError – If failed to set the header size.

set_page_size(page_size)[source]

Set the size of a page, which mainly refers to the block used to store training data; the training data will be split into raw pages and blob pages in MindRecord. The larger the page, the more training data a single page can store.

Parameters

page_size (int) – Size of a page, between 32 KB and 256 MB.

Returns

MSRStatus, SUCCESS or FAILED.

Raises

MRMInvalidPageSizeError – If failed to set the page size.

write_raw_data(raw_data, parallel_writer=False)[source]

Write raw data into a sequential series of MindRecord files and, by default, validate the data against the predefined schema.

Parameters
  • raw_data (list[dict]) – List of raw data.

  • parallel_writer (bool, optional) – Write data in parallel if True (default=False).

Returns

MSRStatus, SUCCESS or FAILED.

Raises
  • ParamTypeError – If the index field is invalid.

  • MRMOpenError – If failed to open the MindRecord file.

  • MRMValidateDataError – If the data does not match the blob fields.

  • MRMSetHeaderError – If failed to set the header.

  • MRMWriteDatasetError – If failed to write the dataset.
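
Putting the pieces together, the sketch below shows a minimal end-to-end write flow with FileWriter; the file name, schema and records are all hypothetical:

>>> from tinyms.data import FileWriter
>>>
>>> writer = FileWriter(file_name="sample.mindrecord", shard_num=1)
>>> schema = {"file_name": {"type": "string"},
...           "label": {"type": "int32"},
...           "data": {"type": "bytes"}}
>>> writer.add_schema(schema, "a toy schema")
>>> writer.add_index(["file_name", "label"])   # index fields must be of primitive type
>>> records = [{"file_name": "1.jpg", "label": 0, "data": b"\x00\x01"},
...            {"file_name": "2.jpg", "label": 1, "data": b"\x02\x03"}]
>>> writer.write_raw_data(records)
>>> writer.commit()   # flush to disk and build the index database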

class tinyms.data.FileReader(file_name, num_consumer=4, columns=None, operator=None)[source]

Class to read a MindRecord file series.

Parameters
  • file_name (str, list[str]) – A MindRecord file name or a list of file names.

  • num_consumer (int, optional) – Number of consumer threads that load data into memory (default=4). It should not be smaller than 1 or larger than the number of CPUs.

  • columns (list[str], optional) – A list of fields whose corresponding data will be read (default=None).

  • operator (int, optional) – Reserved parameter for operators (default=None).

Raises

ParamValueError – If file_name, num_consumer or columns is invalid.

close()[source]

Stop the reader worker and close the file.

get_next()[source]

Yield one batch of data at a time, according to columns.

Yields

dictionary – keys are the same as the columns.

Raises

MRMUnsupportedSchemaError – If the schema is invalid.
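
A minimal read-back sketch, assuming the hypothetical "sample.mindrecord" written above:

>>> from tinyms.data import FileReader
>>>
>>> reader = FileReader(file_name="sample.mindrecord", num_consumer=4)
>>> for item in reader.get_next():   # each item is a dict keyed by column name
...     print(item["file_name"], item["label"])
>>> reader.close()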

class tinyms.data.MindPage(file_name, num_consumer=4)[source]

Class to read a MindRecord file series with pagination.

Parameters
  • file_name (str) – A MindRecord file name or a list of file names.

  • num_consumer (int, optional) – The number of consumer threads that load data into memory (default=4). It should not be smaller than 1 or larger than the number of CPUs.

Raises
  • ParamValueError – If file_name, num_consumer or columns is invalid.

  • MRMInitSegmentError – If failed to initialize ShardSegment.

property candidate_fields

Return the candidate category fields.

Returns

list[str], fields by which the data could be grouped.

property category_field

Getter function for the category field.

Returns

list[str], fields by which the data could be grouped.

get_category_fields()[source]

Return the candidate category fields.

Returns

list[str], fields by which the data could be grouped.

read_at_page_by_id(category_id, page, num_row)[source]

Query by category id with pagination.

Parameters
  • category_id (int) – Category id, as returned by read_category_info.

  • page (int) – Index of the page.

  • num_row (int) – Number of rows in a page.

Returns

list[dict], data queried by category id.

Raises
  • ParamValueError – If any parameter is invalid.

  • MRMFetchDataError – If failed to fetch data by category.

  • MRMUnsupportedSchemaError – If the schema is invalid.

read_at_page_by_name(category_name, page, num_row)[source]

Query by category name with pagination.

Parameters
  • category_name (str) – String value of the category field, as returned by read_category_info.

  • page (int) – Index of the page.

  • num_row (int) – Number of rows in a page.

Returns

list[dict], data queried by category name.

read_category_info()[source]

Return category information when data is grouped by the indicated category field.

Returns

str, description of the group information.

Raises

MRMReadCategoryInfoError – If failed to read the category information.

set_category_field(category_field)[source]

Set the category field for reading.

Note

It should be a candidate category field.

Parameters

category_field (str) – String of the category field name.

Returns

MSRStatus, SUCCESS or FAILED.
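
A pagination sketch, assuming a hypothetical "sample.mindrecord" whose 'label' field was added as an index when the file was written:

>>> from tinyms.data import MindPage
>>>
>>> pager = MindPage(file_name="sample.mindrecord")
>>> print(pager.candidate_fields)          # fields the data can be grouped by
>>> pager.set_category_field("label")
>>> info = pager.read_category_info()      # group description, including category ids
>>> rows = pager.read_at_page_by_id(category_id=0, page=0, num_row=2)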

class tinyms.data.Cifar10ToMR(source, destination)[source]

A class to transform cifar10 into MindRecord.

Parameters
  • source (str) – the cifar10 directory to be transformed.

  • destination (str) – the MindRecord file path to transform into.

Raises

ValueError – If source or destination is invalid.

run(fields=None)[source]

Execute the transformation from cifar10 to MindRecord.

Parameters

fields (list[str], optional) – A list of index fields, e.g. ["label"] (default=None).

Returns

MSRStatus, whether cifar10 was successfully transformed to MindRecord.
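
A minimal conversion sketch with hypothetical paths; `source` points at the directory holding the cifar10 data files:

>>> from tinyms.data import Cifar10ToMR
>>>
>>> transformer = Cifar10ToMR(source="/path/to/cifar10", destination="/path/to/cifar10.mindrecord")
>>> transformer.run(fields=["label"])   # index by label to speed up category reads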

class tinyms.data.Cifar100ToMR(source, destination)[source]

A class to transform cifar100 into MindRecord.

Parameters
  • source (str) – the cifar100 directory to be transformed.

  • destination (str) – the MindRecord file path to transform into.

Raises

ValueError – If source or destination is invalid.

run(fields=None)[source]

Execute the transformation from cifar100 to MindRecord.

Parameters

fields (list[str]) – A list of index fields, e.g. ["fine_label", "coarse_label"].

Returns

MSRStatus, whether cifar100 was successfully transformed to MindRecord.

class tinyms.data.CsvToMR(source, destination, columns_list=None, partition_number=1)[source]

A class to transform CSV files into MindRecord.

Parameters
  • source (str) – the file path of the CSV file to be transformed.

  • destination (str) – the MindRecord file path to transform into.

  • columns_list (list[str], optional) – A list of columns to be read (default=None).

  • partition_number (int, optional) – partition size (default=1).

Raises
  • ValueError – If source, destination or partition_number is invalid.

  • RuntimeError – If columns_list is invalid.

run()[source]

Execute the transformation from CSV to MindRecord.

Returns

MSRStatus, whether the CSV file was successfully transformed to MindRecord.

class tinyms.data.ImageNetToMR(map_file, image_dir, destination, partition_number=1)[source]

A class to transform imagenet into MindRecord.

Parameters
  • map_file (str) –

    the map file that indicates the labels. The map file content should look like this:

    n02119789 0
    n02100735 1
    n02110185 2
    n02096294 3
    

  • image_dir (str) – the image directory that contains the n02119789, n02100735, n02110185 and n02096294 directories.

  • destination (str) – the MindRecord file path to transform into.

  • partition_number (int, optional) – partition size (default=1).

Raises

ValueError – If map_file, image_dir or destination is invalid.

run()[source]

Execute the transformation from imagenet to MindRecord.

Returns

MSRStatus, whether imagenet was successfully transformed to MindRecord.

class tinyms.data.MnistToMR(source, destination, partition_number=1)[source]

A class to transform Mnist into MindRecord.

Parameters
  • source (str) – directory that contains t10k-images-idx3-ubyte.gz, train-images-idx3-ubyte.gz, t10k-labels-idx1-ubyte.gz and train-labels-idx1-ubyte.gz.

  • destination (str) – the MindRecord file directory to transform into.

  • partition_number (int, optional) – partition size (default=1).

Raises

ValueError – If source, destination or partition_number is invalid.

run()[source]

Execute the transformation from Mnist to MindRecord.

Returns

MSRStatus, whether the data was successfully written into MindRecord.

class tinyms.data.TFRecordToMR(source, destination, feature_dict, bytes_fields=None)[source]

A class to transform TFRecord into MindRecord.

Parameters
  • source (str) – the TFRecord file to be transformed.

  • destination (str) – the MindRecord file path to transform into.

  • feature_dict (dict) –

    a dictionary that states the feature types, e.g. feature_dict = {"xxxx": tf.io.FixedLenFeature([], tf.string), "yyyy": tf.io.FixedLenFeature([], tf.int64)}

    The following case, which uses VarLenFeature, is not supported:

    feature_dict = {"context": {"xxxx": tf.io.FixedLenFeature([], tf.string), "yyyy": tf.io.VarLenFeature(tf.int64)}, "sequence": {"zzzz": tf.io.FixedLenSequenceFeature([], tf.float32)}}

  • bytes_fields (list, optional) – the bytes fields that are in feature_dict and may be image bytes.

Raises
  • ValueError – If a parameter is invalid.

  • Exception – If the tensorflow module is not found or its version is incorrect.

run()[source]

Execute the transformation from TFRecord to MindRecord.

Returns

MSRStatus, whether TFRecord was successfully transformed to MindRecord.

tfrecord_iterator()[source]

Yield a dictionary whose keys are the fields in the schema.

Yields

dict, data dictionary whose keys are the same as the columns.

tfrecord_iterator_oldversion()[source]

Yield a dict whose keys are the fields in the schema and whose values are the data. This function is for old versions of TensorFlow (version number < 2.1.0).

Yields

dict, data dictionary whose keys are the same as the columns.
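
A conversion sketch with hypothetical paths and field names; "image_raw" is declared in bytes_fields so it is stored as raw image bytes:

>>> import tensorflow as tf
>>> from tinyms.data import TFRecordToMR
>>>
>>> feature_dict = {"image_raw": tf.io.FixedLenFeature([], tf.string),
...                 "label": tf.io.FixedLenFeature([], tf.int64)}
>>> transformer = TFRecordToMR(source="/path/to/data.tfrecord",
...                            destination="/path/to/data.mindrecord",
...                            feature_dict=feature_dict,
...                            bytes_fields=["image_raw"])
>>> transformer.run()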

tinyms.data.download_dataset(dataset_name, local_path='.')[source]

This function is defined to easily download any public dataset without specifying many details.

Parameters
  • dataset_name (str) – The official name of the dataset; currently supports mnist, cifar10 and cifar100.

  • local_path (str) – Specifies the local location the dataset is downloaded to. Default: '.'.

Returns

str, the source location of the downloaded dataset.

Examples

>>> from tinyms.data import download_dataset
>>>
>>> ds_path = download_dataset('mnist')
tinyms.data.generate_image_list(dir_path, max_dataset_size=inf)[source]

Traverse the directory to generate a list of image paths.

Parameters
  • dir_path (str) – image directory.

  • max_dataset_size (int) – Maximum number of image paths to return.

Returns

Image path list.

tinyms.data.load_resized_img(path, width=256, height=256)[source]

Load an image in RGB mode and resize it to (width, height), (256, 256) by default.

Parameters
  • path (str) – image path.

  • width (int) – image width. Default: 256.

  • height (int) – image height. Default: 256.

Returns

PIL image class.

tinyms.data.load_img(path)[source]

Load an image in RGB mode.

Parameters

path (str) – image path.

Returns

PIL image class.
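
A small usage sketch combining these helpers, with a hypothetical image folder:

>>> from tinyms.data import generate_image_list, load_img, load_resized_img
>>>
>>> image_paths = generate_image_list("/path/to/images", max_dataset_size=100)
>>> original = load_img(image_paths[0])          # PIL image at its native size
>>> resized = load_resized_img(image_paths[0])   # PIL image resized to (256, 256)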