update doc 0.6.4
twmht committed Apr 16, 2017
1 parent 9d87eb5 commit e834f80
Showing 4 changed files with 58 additions and 73 deletions.
92 changes: 31 additions & 61 deletions docs/api/options.rst
@@ -215,47 +215,14 @@ Options object
| *Type:* ``[int]``
| *Default:* ``[1, 1, 1, 1, 1, 1, 1]``
.. py:attribute:: expanded_compaction_factor
.. py:attribute:: max_compaction_bytes
Maximum number of bytes in all compacted files. We avoid expanding
the lower-level file set of a compaction if it would make the
total compaction cover more than
``expanded_compaction_factor * targetFileSizeLevel()`` bytes.
We try to keep the number of bytes in one compaction below this
threshold, but that is not guaranteed.
A value of 0 will be sanitized.

| *Type:* ``int``
| *Default:* ``25``
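
A minimal sketch of tuning ``max_compaction_bytes`` from Python, assuming
the attribute is assigned like the other options here and that
``target_file_size_base`` is readable on the same ``Options`` object
(the database name is illustrative)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    # cap the total bytes of a single compaction at 25 times the
    # target file size, mirroring the documented default
    opts.max_compaction_bytes = opts.target_file_size_base * 25
    db = rocksdb.DB("tuned.db", opts)
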
.. py:attribute:: source_compaction_factor
Maximum number of bytes in all source files to be compacted in a
single compaction run. We avoid picking too many files in the
source level so that the total source bytes for the compaction do
not exceed ``source_compaction_factor * targetFileSizeLevel()`` bytes.
If set to 1, pick ``maxfilesize`` amount of data as the source of
a compaction.

| *Type:* ``int``
| *Default:* ``1``
.. py:attribute:: max_grandparent_overlap_factor
Controls the maximum bytes of overlap with the grandparent level
(i.e., level+2) before we stop building a single file in a
level->level+1 compaction.

| *Type:* ``int``
| *Default:* ``10``
.. py:attribute:: disable_data_sync
If true, the contents of data files are not synced
to stable storage. Their contents remain in the OS buffers until the
OS decides to flush them. This option is good for bulk-loading
of data. Once the bulk load is complete, issue a
sync to the OS to flush all dirty buffers to stable storage.

| *Type:* ``bool``
| *Default:* ``False``
| *Default:* ``target_file_size_base * 25``
.. py:attribute:: use_fsync
@@ -447,12 +414,6 @@ Options object
| *Type:* ``bool``
| *Default:* ``True``
.. py:attribute:: allow_os_buffer
Data being read from file storage may be buffered in the OS.

| *Type:* ``bool``
| *Default:* ``True``

.. py:attribute:: allow_mmap_reads
@@ -517,22 +478,24 @@ Options object
| *Type:* ``int``
| *Default:* ``0``
.. py:attribute:: verify_checksums_in_compaction
If ``True``, compaction will verify the checksum on every read that
happens as part of compaction.

| *Type:* ``bool``
| *Default:* ``True``

.. py:attribute:: compaction_style
The compaction style. Can be set to ``"level"`` to use level-style
compaction. For universal-style compaction use ``"universal"``. For
FIFO compaction use ``"fifo"``. To disable compaction altogether use
``"none"``.

| *Type:* ``string``
| *Default:* ``level``
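
As a sketch, switching a store to universal-style compaction is a
one-line change (the database name is illustrative)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    # one of 'level', 'universal', 'fifo' or 'none'
    opts.compaction_style = 'universal'
    db = rocksdb.DB("universal.db", opts)
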
.. py:attribute:: compaction_pri
If compaction_style is ``"level"`` (kCompactionStyleLevel), determines
for each level which files are prioritized to be picked for compaction.

| *Type:* Member of :py:class:`rocksdb.CompactionPri`
| *Default:* :py:attr:`rocksdb.CompactionPri.kByCompensatedSize`
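
A sketch of selecting a non-default priority; ``kMinOverlappingRatio``
is picked here purely for illustration::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    # compaction_pri only takes effect with level-style compaction
    opts.compaction_style = 'level'
    opts.compaction_pri = rocksdb.CompactionPri.kMinOverlappingRatio
    db = rocksdb.DB("prioritized.db", opts)
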
.. py:attribute:: compaction_options_universal
Options to use for universal-style compaction. They only make sense if
@@ -603,15 +566,6 @@ Options object
opts = rocksdb.Options()
opts.compaction_options_universal = {'stop_style': 'similar_size'}

.. py:attribute:: filter_deletes
Use the KeyMayExist API to filter deletes when this is true.
If KeyMayExist returns false, i.e. the key definitely does not exist,
the delete is a no-op. KeyMayExist only incurs an in-memory lookup,
so this optimization avoids writing the delete to storage when appropriate.

| *Type:* ``bool``
| *Default:* ``False``

.. py:attribute:: max_sequential_skip_in_iterations
@@ -726,6 +680,18 @@ Options object
*Default:* ``None``


CompactionPri
================

.. py:class:: rocksdb.CompactionPri
Defines the supported compaction priorities

.. py:attribute:: kByCompensatedSize
.. py:attribute:: kOldestLargestSeqFirst
.. py:attribute:: kOldestSmallestSeqFirst
.. py:attribute:: kMinOverlappingRatio
CompressionTypes
================

@@ -739,6 +705,10 @@ CompressionTypes
.. py:attribute:: bzip2_compression
.. py:attribute:: lz4_compression
.. py:attribute:: lz4hc_compression
.. py:attribute:: xpress_compression
.. py:attribute:: zstd_compression
.. py:attribute:: zstdnotfinal_compression
.. py:attribute:: disable_compression
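
A sketch of enabling one of the newly listed codecs, assuming the
underlying RocksDB build was compiled with zstd support::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    opts.compression = rocksdb.CompressionType.zstd_compression
    db = rocksdb.DB("compressed.db", opts)
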
BytewiseComparator
==================
18 changes: 9 additions & 9 deletions docs/conf.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
#
# pyrocksdb documentation build configuration file, created by
# python-rocksdb documentation build configuration file, created by
# sphinx-quickstart on Tue Dec 31 12:50:54 2013.
#
# This file is execfile()d with the current directory set to its
@@ -47,17 +47,17 @@
master_doc = 'index'

# General information about the project.
project = u'pyrocksdb'
project = u'python-rocksdb'
copyright = u'2014, sh'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.4'
version = '0.6'
# The full version, including alpha/beta/rc tags.
release = '0.4'
release = '0.6.4'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -180,7 +180,7 @@
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'pyrocksdbdoc'
htmlhelp_basename = 'python-rocksdbdoc'


# -- Options for LaTeX output ---------------------------------------------
@@ -200,7 +200,7 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'pyrocksdb.tex', u'pyrocksdb Documentation',
('index', 'python-rocksdb.tex', u'python-rocksdb Documentation',
u'sh', 'manual'),
]

@@ -230,7 +230,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'pyrocksdb', u'pyrocksdb Documentation',
('index', 'python-rocksdb', u'python-rocksdb Documentation',
[u'sh'], 1)
]

@@ -244,8 +244,8 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'pyrocksdb', u'pyrocksdb Documentation',
u'sh', 'pyrocksdb', 'One line description of project.',
('index', 'python-rocksdb', u'python-rocksdb Documentation',
u'sh', 'python-rocksdb', 'One line description of project.',
'Miscellaneous'),
]

4 changes: 2 additions & 2 deletions docs/index.rst
@@ -1,4 +1,4 @@
Welcome to pyrocksdb's documentation!
Welcome to python-rocksdb's documentation!
=====================================

Overview
@@ -11,7 +11,7 @@ Python bindings to the C++ interface of http://rocksdb.org/ using cython::
print db.get(b"a")


Tested with python2.7 and python3.4 and RocksDB version 3.12
Tested with python2.7 and python3.4 and RocksDB version 5.3.0

.. toctree::
:maxdepth: 2
17 changes: 16 additions & 1 deletion docs/tutorial/index.rst
@@ -1,4 +1,4 @@
Basic Usage of pyrocksdb
Basic Usage of python-rocksdb
************************

Open
@@ -197,6 +197,21 @@ The following example python merge operator implements a counter ::
# prints b'2'
print db.get(b"a")

We provide a set of default operators: ``uint64add``, ``put`` and ``stringappend``.

The following example uses ``uint64add``, where each operand is a ``uint64``::

    import rocksdb
    import struct
    opts = rocksdb.Options()
    opts.create_if_missing = True
    opts.merge_operator = 'uint64add'
    db = rocksdb.DB("test.db", opts)
    # since every operand is a uint64, pack it into an 8-byte string
    db.put(b'a', struct.pack('Q', 1000))
    db.merge(b'a', struct.pack('Q', 2000))
    assert struct.unpack('Q', db.get(b'a'))[0] == 3000
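
``stringappend`` works the same way on raw byte strings; this sketch
assumes the default ``StringAppendOperator`` delimiter of ``','``::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    opts.merge_operator = 'stringappend'
    db = rocksdb.DB("append.db", opts)
    db.put(b'key', b'a')
    db.merge(b'key', b'b')
    # prints b'a,b', assuming the ',' delimiter
    print db.get(b'key')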

PrefixExtractor
===============

