Commit 11ec836

small fixes

jaquesgrobler committed Jan 15, 2014
1 parent 1dcab45 commit 11ec836
Showing 1 changed file with 7 additions and 7 deletions.
doc/tutorial/text_analytics/working_with_text_data.rst: 14 changes (7 additions, 7 deletions)

@@ -537,31 +537,31 @@ Bonus point if the utility is able to give a confidence level for its
 predictions.


-Where to go from Here
+Where to from here
 ------------------

 Here are a few suggestions to help further your scikit-learn intuition
 upon the completion of this tutorial:


-- Try playing around with the `analyzer` and `token normalisation` under
+* Try playing around with the ``analyzer`` and ``token normalisation`` under
 :class:`CountVectorizer`

-- If you don't have labels, try using
+* If you don't have labels, try using
 :ref:`Clustering <example_document_clustering.py>`
 on your problem.

-- If you have multiple labels per document, e.g categories, have a look
+* If you have multiple labels per document, e.g categories, have a look
 at the :ref:`Multiclass and multilabel section <multiclass>`

-- Try using :ref:`Truncated SVD <LSA>` for
+* Try using :ref:`Truncated SVD <LSA>` for
 `latent semantic analysis <http://en.wikipedia.org/wiki/Latent_semantic_analysis>`_.

-- Have a look at using
+* Have a look at using
 :ref:`Out-of-core Classification
 <example_applications_plot_out_of_core_classification.py>` to
 learn from data that would not fit into the computer main memory.

-- Have a look at the :ref:`Hashing Vectorizer <hashing_vectorizer>`
+* Have a look at the :ref:`Hashing Vectorizer <hashing_vectorizer>`
 as a memory efficient alternative to :class:`CountVectorizer`.
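
The first and last suggestions in the diff above both concern the vectorizer. As a rough, illustrative sketch only (the toy corpus and every parameter value below are invented for this note, not taken from the tutorial), this is the kind of experiment they point to: adjusting CountVectorizer's analyzer and token-normalisation options, then swapping in HashingVectorizer when the vocabulary no longer fits in memory:

    from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer

    docs = [
        "God is love",
        "OpenGL on the GPU is fast",
        "Atheism and religion on usenet",
    ]

    # Token-normalisation knobs on CountVectorizer: case folding, accent
    # stripping, and the analyzer that turns each document into tokens.
    count_vect = CountVectorizer(
        lowercase=True,
        strip_accents="unicode",
        analyzer="word",       # try "char_wb" for character n-grams
        ngram_range=(1, 2),    # unigrams and bigrams
        stop_words="english",
    )
    X_counts = count_vect.fit_transform(docs)
    print(X_counts.shape)

    # HashingVectorizer keeps no in-memory vocabulary, so it scales to
    # corpora CountVectorizer cannot hold; the trade-off is that feature
    # indices can no longer be mapped back to the original tokens.
    hash_vect = HashingVectorizer(n_features=2**18)
    X_hashed = hash_vect.fit_transform(docs)
    print(X_hashed.shape)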
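
Similarly, the "Truncated SVD for latent semantic analysis" bullet can be tried with a small pipeline along the lines of the following sketch, again with an invented corpus and an arbitrary choice of two components:

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import Normalizer

    docs = [
        "God is love",
        "OpenGL on the GPU is fast",
        "Atheism and religion on usenet",
        "Rendering 3d graphics on the GPU",
    ]

    # tf-idf features, reduced to two latent components with truncated SVD;
    # the Normalizer rescales each row to unit length, the usual LSA
    # post-processing step so cosine similarity becomes a plain dot product.
    lsa = make_pipeline(
        TfidfVectorizer(),
        TruncatedSVD(n_components=2),
        Normalizer(copy=False),
    )
    X_lsa = lsa.fit_transform(docs)
    print(X_lsa.shape)  # (4, 2)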
