author     wiz <wiz@pkgsrc.org>  2012-06-03 21:29:57 +0000
committer  wiz <wiz@pkgsrc.org>  2012-06-03 21:29:57 +0000
commit     e5f064e5d3bf52e4b8a9d321afd25cdb95d2eeba (patch)
tree       0997955d324b6d33c01e4eef1ea1c4e1bcb921f9 /www
parent     4a2d0697c806bc675913401103d8a636e660491f (diff)
download   pkgsrc-e5f064e5d3bf52e4b8a9d321afd25cdb95d2eeba.tar.gz
Initial import of py-beautifulsoup4, a rewrite of py-beautifulsoup.
Changes compared to version 3 (in py-beautifulsoup):
= 4.1.0 (20120529) =
* Added experimental support for fixing Windows-1252 characters
embedded in UTF-8 documents. (UnicodeDammit.detwingle(); a short
example follows this list.)
* Fixed the handling of &quot; with the built-in parser. [bug=993871]
* Comments, processing instructions, document type declarations, and
markup declarations are now treated as preformatted strings, the way
CData blocks are. [bug=1001025]
* Fixed a bug with the lxml treebuilder that prevented the user from
adding attributes to a tag that didn't originally have
attributes. [bug=1002378] Thanks to Oliver Beattie for the patch.
* Fixed some edge-case bugs having to do with inserting an element
into a tag it's already inside, and replacing one of a tag's
children with another. [bug=997529]
* Added the ability to search for attribute values specified in UTF-8. [bug=1003974]
This caused a major refactoring of the search code. All the tests
pass, but it's possible that some searches will behave differently.
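A minimal sketch of the detwingle() call mentioned above; the sample
document is invented, mixing UTF-8 text with a Windows-1252 quotation,
which is exactly the case detwingle() is meant to repair:

  from bs4 import UnicodeDammit

  # A document that is mostly UTF-8, with a Windows-1252 quotation pasted in.
  snowmen = (u"\N{SNOWMAN}" * 3).encode("utf8")
  quote = u"\N{LEFT DOUBLE QUOTATION MARK}Hi!\N{RIGHT DOUBLE QUOTATION MARK}".encode("windows_1252")
  doc = snowmen + quote

  fixed = UnicodeDammit.detwingle(doc)  # returns bytes that are consistently UTF-8
  print(fixed.decode("utf8"))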
= 4.0.5 (20120427) =
* Added a new method, wrap(), which wraps an element in a tag.
* Renamed replace_with_children() to unwrap(), which is easier to
understand and is also the jQuery name of the function. (Both are
shown in the example at the end of this section.)
* Made encoding substitution in <meta> tags completely transparent (no
more %SOUP-ENCODING%).
* Fixed a bug in decoding data that contained a byte-order mark, such
as data encoded in UTF-16LE. [bug=988980]
* Fixed a bug that made the HTMLParser treebuilder generate XML
definitions ending with two question marks instead of
one. [bug=984258]
* Upon document generation, CData objects are no longer run through
the formatter. [bug=988905]
* The test suite now passes when lxml is not installed, whether or not
html5lib is installed. [bug=987004]
* Print a warning on HTMLParseErrors to let people know they should
install a better parser library.
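A short sketch of the new wrap() and unwrap() methods referenced at
the top of this section; the markup is invented and Python's built-in
parser is assumed to be selected by the name "html.parser":

  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<p>I wish I was bold.</p>", "html.parser")
  soup.p.string.wrap(soup.new_tag("b"))  # <p><b>I wish I was bold.</b></p>
  soup.b.unwrap()                        # back to <p>I wish I was bold.</p>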
= 4.0.4 (20120416) =
* Fixed a bug that sometimes created disconnected trees.
* Fixed a bug with the string setter that moved a string around the
tree instead of copying it. [bug=983050]
* Attribute values are now run through the provided output formatter.
Previously they were always run through the 'minimal' formatter. In
the future I may make it possible to specify different formatters
for attribute values and strings, but for now, consistent behavior
is better than inconsistent behavior. [bug=980237]
* Added the missing renderContents method from Beautiful Soup 3. Also
added an encode_contents() method to go along with decode_contents().
* Give a more useful error when the user tries to run the Python 2
version of BS under Python 3.
* UnicodeDammit can now convert Microsoft smart quotes to ASCII with
UnicodeDammit(markup, smart_quotes_to="ascii").
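A minimal sketch of the smart_quotes_to option from the last entry.
The Windows-1252 byte string is invented, and it assumes the converted
text is read from the unicode_markup attribute:

  from bs4 import UnicodeDammit

  markup = b"<p>I just \x93love\x94 Microsoft Word\x92s smart quotes</p>"
  dammit = UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="ascii")
  print(dammit.unicode_markup)  # smart quotes come out as plain ASCII quotes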
= 4.0.3 (20120403) =
* Fixed a typo that caused some versions of Python 3 to convert the
Beautiful Soup codebase incorrectly.
* Got rid of the 4.0.2 workaround for HTML documents--it was
unnecessary and the workaround was triggering a (possibly different,
but related) bug in lxml. [bug=972466]
= 4.0.2 (20120326) =
* Worked around a possible bug in lxml that prevents non-tiny XML
documents from being parsed. [bug=963880, bug=963936]
* Fixed a bug where specifying `text` while also searching for a tag
only worked if `text` wanted an exact string match. [bug=955942]
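A small sketch of the fixed behaviour from the last entry, with
invented markup: `text` can now be a regular expression (or another
non-exact matcher) even when a tag name is also given:

  import re
  from bs4 import BeautifulSoup

  soup = BeautifulSoup('<a href="/">Click here</a> or <a href="/x">there</a>', "html.parser")
  soup.find_all("a", text=re.compile("here"))  # previously only an exact string like text="Click here" worked alongside a tag name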
= 4.0.1 (20120314) =
* This is the first official release of Beautiful Soup 4. There is no
4.0.0 release, to eliminate any possibility that packaging software
might treat "4.0.0" as being an earlier version than "4.0.0b10".
* Brought BS up to date with the latest release of soupselect, adding
CSS selector support for direct descendant matches and multiple CSS
class matches.
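An illustrative sketch (invented markup) of the two selector features
just mentioned, direct descendant matches and multiple CSS class
matches:

  from bs4 import BeautifulSoup

  soup = BeautifulSoup(
      '<div><a class="externalLink primary" href="http://foo.com/">foo</a></div>',
      "html.parser")
  soup.select("div > a")                 # direct descendant match
  soup.select("a.externalLink.primary")  # tag carrying multiple CSS classes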
= 4.0.0b10 (20120302) =
* Added support for simple CSS selectors, taken from the soupselect project.
* Fixed a crash when using html5lib. [bug=943246]
* In HTML5-style <meta charset="foo"> tags, the value of the "charset"
attribute is now replaced with the appropriate encoding on
output. [bug=942714]
* Fixed a bug that caused calling a tag to sometimes call find_all()
with the wrong arguments. [bug=944426]
* For backwards compatibility, brought back the BeautifulStoneSoup
class as a deprecated wrapper around BeautifulSoup.
= 4.0.0b9 (20120228) =
* Fixed the string representation of DOCTYPEs that have both a public
ID and a system ID.
* Fixed the generated XML declaration.
* Renamed Tag.nsprefix to Tag.prefix, for consistency with
NamespacedAttribute.
* Fixed a test failure that occurred on Python 3.x when chardet was
installed.
* Made prettify() return Unicode by default, so it will look nice on
Python 3 when passed into print().
= 4.0.0b8 (20120224) =
* All tree builders now preserve namespace information in the
documents they parse. If you use the html5lib parser or lxml's XML
parser, you can access the namespace URL for a tag as tag.namespace.
However, there is no special support for namespace-oriented
searching or tree manipulation. When you search the tree, you need
to use namespace prefixes exactly as they're used in the original
document.
* The string representation of a DOCTYPE always ends in a newline.
* Issue a warning if the user tries to use a SoupStrainer in
conjunction with the html5lib tree builder, which doesn't support
them.
= 4.0.0b7 (20120223) =
* Upon decoding to string, any characters that can't be represented in
your chosen encoding will be converted into numeric XML entity
references.
* Issue a warning if characters were replaced with REPLACEMENT
CHARACTER during Unicode conversion.
* Restored compatibility with Python 2.6.
* The install process no longer installs docs or auxiliary text files.
* It's now possible to deepcopy a BeautifulSoup object created with
Python's built-in HTML parser; see the example at the end of this
section.
* About 100 unit tests that "test" the behavior of various parsers on
invalid markup have been removed. Legitimate changes to those
parsers caused these tests to fail, indicating that perhaps
Beautiful Soup should not test the behavior of foreign
libraries.
The problematic unit tests have been reformulated as informational
comparisons generated by the script
scripts/demonstrate_parser_differences.py.
This makes Beautiful Soup compatible with html5lib version 0.95 and
future versions of HTMLParser.
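A tiny sketch of the deepcopy support mentioned above; everything here
is the standard library plus bs4, and the markup is made up:

  import copy
  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<p>copy me</p>", "html.parser")  # Python's built-in HTML parser
  clone = copy.deepcopy(soup)  # an independent tree; edits to clone don't touch soup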
= 4.0.0b6 (20120216) =
* Multi-valued attributes like "class" always have a list of values,
even if there's only one value in the list (example at the end of
this section).
* Added a number of multi-valued attributes defined in HTML5.
* Stopped generating a space before the slash that closes an
empty-element tag. This may come back if I add a special XHTML mode
(http://www.w3.org/TR/xhtml1/#C_2), but right now it's pretty
useless.
* Passing text along with tag-specific arguments to a find* method:
find("a", text="Click here")
will find tags that contain the given text as their
.string. Previously, the tag-specific arguments were ignored and
only strings were searched.
* Fixed a bug that caused the html5lib tree builder to build a
partially disconnected tree. Generally cleaned up the html5lib tree
builder.
* If you restrict a multi-valued attribute like "class" to a string
that contains spaces, Beautiful Soup will only consider it a match
if the values correspond to that specific string.
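A sketch (invented markup) of the multi-valued "class" behaviour
described in this section, including the space-containing-string
restriction from the last entry:

  from bs4 import BeautifulSoup

  soup = BeautifulSoup('<p class="body strikeout"></p>', "html.parser")
  soup.p["class"]                                        # ['body', 'strikeout'] -- always a list
  soup.find_all("p", attrs={"class": "strikeout"})       # matches: 'strikeout' is one of the values
  soup.find_all("p", attrs={"class": "body strikeout"})  # a string with spaces must match the attribute value exactly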
= 4.0.0b5 (20120209) =
* Rationalized Beautiful Soup's treatment of CSS class. A tag
belonging to multiple CSS classes is treated as having a list of
values for the 'class' attribute. Searching for a CSS class will
match *any* of the CSS classes.
This actually affects all attributes that the HTML standard defines
as taking multiple values (class, rel, rev, archive, accept-charset,
and headers), but 'class' is by far the most common. [bug=41034]
* If you pass anything other than a dictionary as the second argument
to one of the find* methods, it'll assume you want to use that
object to search against a tag's CSS classes. Previously this only
worked if you passed in a string. An example follows this section.
* Fixed a bug that caused a crash when you passed a dictionary as an
attribute value (possibly because you mistyped "attrs"). [bug=842419]
* Unicode, Dammit now detects the encoding in HTML 5-style <meta> tags
like <meta charset="utf-8" />. [bug=837268]
* If Unicode, Dammit can't figure out a consistent encoding for a
page, it will try each of its guesses again, with errors="replace"
instead of errors="strict". This may mean that some data gets
replaced with REPLACEMENT CHARACTER, but at least most of it will
get turned into Unicode. [bug=754903]
* Patched over a bug in html5lib (?) that was crashing Beautiful Soup
on certain kinds of markup. [bug=838800]
* Fixed a bug that wrecked the tree if you replaced an element with an
empty string. [bug=728697]
* Improved Unicode, Dammit's behavior when you give it Unicode to
begin with.
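A sketch of the CSS-class searching described at the top of this
section; the markup is invented, and the regular-expression line
assumes any non-dictionary matcher is applied to each class value:

  import re
  from bs4 import BeautifulSoup

  soup = BeautifulSoup('<p class="body strikeout"></p>', "html.parser")
  soup.find_all("p", "strikeout")         # a plain string matches any one of the tag's classes
  soup.find_all("p", re.compile("^bod"))  # a non-dict object such as a regular expression now works too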
= 4.0.0b4 (20120208) =
* Added BeautifulSoup.new_string() to go along with
BeautifulSoup.new_tag(); both appear in the example below.
* BeautifulSoup.new_tag() will follow the rules of whatever
tree-builder was used to create the original BeautifulSoup object. A
new <p> tag will look like "<p />" if the soup object was created to
parse XML, but it will look like "<p></p>" if the soup object was
created to parse HTML.
* We pass in strict=False to html.parser on Python 3, greatly
improving html.parser's ability to handle bad HTML.
* We also monkeypatch a serious bug in html.parser that made
strict=False disastrous on Python 3.2.2.
* Replaced the "substitute_html_entities" argument with the
more general "formatter" argument.
* Bare ampersands and angle brackets are always converted to XML
entities unless the user prevents it.
* Added PageElement.insert_before() and PageElement.insert_after(),
which let you put an element into the parse tree with respect to
some other element.
* Raise an exception when the user tries to do something nonsensical
like insert a tag into itself.
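A short sketch tying together new_tag(), new_string() and
insert_before() from this section; the markup and tag names are made
up:

  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<p>stop</p>", "html.parser")
  tag = soup.new_tag("b")                  # built by the same tree builder as the soup
  tag.append(soup.new_string("Don't "))
  soup.p.string.insert_before(tag)         # place the new element relative to an existing one
  print(soup.p)                            # <p><b>Don't </b>stop</p>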
= 4.0.0b3 (20120203) =
Beautiful Soup 4 is a nearly-complete rewrite that removes Beautiful
Soup's custom HTML parser in favor of a system that lets you write a
little glue code and plug in any HTML or XML parser you want.
Beautiful Soup 4.0 comes with glue code for four parsers:
* Python's standard HTMLParser (html.parser in Python 3)
* lxml's HTML and XML parsers
* html5lib's HTML parser
HTMLParser is the default, but I recommend you install lxml if you
can.
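A sketch of how a parser is chosen; the markup is invented, and the
"lxml" and "html5lib" names assume those libraries are installed:

  from bs4 import BeautifulSoup

  markup = "<p>Some<p>sloppy<br>markup"
  soup = BeautifulSoup(markup)              # no argument: the default builder (HTMLParser at this release)
  soup = BeautifulSoup(markup, "lxml")      # lxml's HTML parser
  soup = BeautifulSoup(markup, "html5lib")  # html5lib's HTML parser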
For complete documentation, see the Sphinx documentation in
bs4/doc/source/. What follows is a summary of the changes from
Beautiful Soup 3.
=== The module name has changed ===
Previously you imported the BeautifulSoup class from a module also
called BeautifulSoup. To save keystrokes and make it clear which
version of the API is in use, the module is now called 'bs4':
>>> from bs4 import BeautifulSoup
=== It works with Python 3 ===
Beautiful Soup 3.1.0 worked with Python 3, but the parser it used was
so bad that it barely worked at all. Beautiful Soup 4 works with
Python 3, and since its parser is pluggable, you don't sacrifice
quality.
Special thanks to Thomas Kluyver and Ezio Melotti for getting Python 3
support to the finish line. Ezio Melotti is also to thank for greatly
improving the HTML parser that comes with Python 3.2.
=== CDATA sections are normal text, if they're understood at all. ===
Currently, the lxml and html5lib HTML parsers ignore CDATA sections in
markup:
<p><![CDATA[foo]]></p> => <p></p>
A future version of html5lib will turn CDATA sections into text nodes,
but only within tags like <svg> and <math>:
<svg><![CDATA[foo]]></svg> => <svg>foo</svg>
The default XML parser (which uses lxml behind the scenes) turns CDATA
sections into ordinary text elements:
<p><![CDATA[foo]]></p> => <p>foo</p>
In theory it's possible to preserve the CDATA sections when using the
XML parser, but I don't see how to get it to work in practice.
=== Miscellaneous other stuff ===
If the BeautifulSoup instance has .is_xml set to True, an appropriate
XML declaration will be emitted when the tree is transformed into a
string:
<?xml version="1.0" encoding="utf-8"?>
<markup>
...
</markup>
The ['lxml', 'xml'] tree builder sets .is_xml to True; the other tree
builders set it to False. If you want to parse XHTML with an HTML
parser, you can set it manually.
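A sketch of the XML behaviour just described, assuming lxml is
installed; the markup is invented:

  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<markup><a>1</a></markup>", ["lxml", "xml"])  # the ['lxml', 'xml'] tree builder
  print(soup.is_xml)      # True
  print(soup.prettify())  # output begins with an XML declaration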
Diffstat (limited to 'www')
-rw-r--r--  www/py-beautifulsoup4/DESCR     | 25
-rw-r--r--  www/py-beautifulsoup4/Makefile  | 23
-rw-r--r--  www/py-beautifulsoup4/PLIST     | 50
-rw-r--r--  www/py-beautifulsoup4/distinfo  |  5
4 files changed, 103 insertions, 0 deletions
diff --git a/www/py-beautifulsoup4/DESCR b/www/py-beautifulsoup4/DESCR
new file mode 100644
index 00000000000..998932a10d0
--- /dev/null
+++ b/www/py-beautifulsoup4/DESCR
@@ -0,0 +1,25 @@
+Beautiful Soup is a Python library designed for quick turnaround
+projects like screen-scraping. Three features make it powerful:
+
+ * Beautiful Soup provides a few simple methods and Pythonic idioms
+   for navigating, searching, and modifying a parse tree: a toolkit
+   for dissecting a document and extracting what you need. It doesn't
+   take much code to write an application
+ * Beautiful Soup automatically converts incoming documents to
+   Unicode and outgoing documents to UTF-8. You don't have to think
+   about encodings, unless the document doesn't specify an encoding
+   and Beautiful Soup can't autodetect one. Then you just have to
+   specify the original encoding.
+ * Beautiful Soup sits on top of popular Python parsers like lxml
+   and html5lib, allowing you to try out different parsing strategies
+   or trade speed for flexibility.
+
+Beautiful Soup parses anything you give it, and does the tree
+traversal stuff for you. You can tell it "Find all the links", or
+"Find all the links of class externalLink", or "Find all the links
+whose urls match "foo.com", or "Find the table heading that's got
+bold text, then give me that text."
+
+Valuable data that was once locked up in poorly-designed websites
+is now within your reach. Projects that would have taken hours take
+only minutes with Beautiful Soup.
diff --git a/www/py-beautifulsoup4/Makefile b/www/py-beautifulsoup4/Makefile
new file mode 100644
index 00000000000..239d4c5d0c4
--- /dev/null
+++ b/www/py-beautifulsoup4/Makefile
@@ -0,0 +1,23 @@
+# $NetBSD: Makefile,v 1.1.1.1 2012/06/03 21:29:57 wiz Exp $
+#
+
+DISTNAME=	beautifulsoup4-4.1.0
+PKGNAME=	${PYPKGPREFIX}-${DISTNAME}
+CATEGORIES=	www python
+MASTER_SITES=	http://www.crummy.com/software/BeautifulSoup/bs4/download/4.0/
+
+MAINTAINER=	pkgsrc-users@NetBSD.org
+HOMEPAGE=	http://www.crummy.com/software/BeautifulSoup/
+COMMENT=	HTML/XML Parser for Python, version 4
+LICENSE=	modified-bsd
+
+DEPENDS+=	${PYPKGPREFIX}-lxml-[0-9]*:../../textproc/py-lxml
+
+PKG_DESTDIR_SUPPORT=	user-destdir
+PYTHON_VERSIONS_INCLUDE_3X=	yes
+
+do-test:
+	cd ${WRKSRC} && ${PYTHONBIN} -m unittest discover -s bs4
+
+.include "../../lang/python/distutils.mk"
+.include "../../mk/bsd.pkg.mk"
diff --git a/www/py-beautifulsoup4/PLIST b/www/py-beautifulsoup4/PLIST
new file mode 100644
index 00000000000..28597ba2163
--- /dev/null
+++ b/www/py-beautifulsoup4/PLIST
@@ -0,0 +1,50 @@
+@comment $NetBSD: PLIST,v 1.1.1.1 2012/06/03 21:29:57 wiz Exp $
+${PYSITELIB}/${EGG_FILE}
+${PYSITELIB}/bs4/__init__.py
+${PYSITELIB}/bs4/__init__.pyc
+${PYSITELIB}/bs4/__init__.pyo
+${PYSITELIB}/bs4/builder/__init__.py
+${PYSITELIB}/bs4/builder/__init__.pyc
+${PYSITELIB}/bs4/builder/__init__.pyo
+${PYSITELIB}/bs4/builder/_html5lib.py
+${PYSITELIB}/bs4/builder/_html5lib.pyc
+${PYSITELIB}/bs4/builder/_html5lib.pyo
+${PYSITELIB}/bs4/builder/_htmlparser.py
+${PYSITELIB}/bs4/builder/_htmlparser.pyc
+${PYSITELIB}/bs4/builder/_htmlparser.pyo
+${PYSITELIB}/bs4/builder/_lxml.py
+${PYSITELIB}/bs4/builder/_lxml.pyc
+${PYSITELIB}/bs4/builder/_lxml.pyo
+${PYSITELIB}/bs4/dammit.py
+${PYSITELIB}/bs4/dammit.pyc
+${PYSITELIB}/bs4/dammit.pyo
+${PYSITELIB}/bs4/element.py
+${PYSITELIB}/bs4/element.pyc
+${PYSITELIB}/bs4/element.pyo
+${PYSITELIB}/bs4/testing.py
+${PYSITELIB}/bs4/testing.pyc
+${PYSITELIB}/bs4/testing.pyo
+${PYSITELIB}/bs4/tests/__init__.py
+${PYSITELIB}/bs4/tests/__init__.pyc
+${PYSITELIB}/bs4/tests/__init__.pyo
+${PYSITELIB}/bs4/tests/test_builder_registry.py
+${PYSITELIB}/bs4/tests/test_builder_registry.pyc
+${PYSITELIB}/bs4/tests/test_builder_registry.pyo
+${PYSITELIB}/bs4/tests/test_docs.py
+${PYSITELIB}/bs4/tests/test_docs.pyc
+${PYSITELIB}/bs4/tests/test_docs.pyo
+${PYSITELIB}/bs4/tests/test_html5lib.py
+${PYSITELIB}/bs4/tests/test_html5lib.pyc
+${PYSITELIB}/bs4/tests/test_html5lib.pyo
+${PYSITELIB}/bs4/tests/test_htmlparser.py
+${PYSITELIB}/bs4/tests/test_htmlparser.pyc
+${PYSITELIB}/bs4/tests/test_htmlparser.pyo
+${PYSITELIB}/bs4/tests/test_lxml.py
+${PYSITELIB}/bs4/tests/test_lxml.pyc
+${PYSITELIB}/bs4/tests/test_lxml.pyo
+${PYSITELIB}/bs4/tests/test_soup.py
+${PYSITELIB}/bs4/tests/test_soup.pyc
+${PYSITELIB}/bs4/tests/test_soup.pyo
+${PYSITELIB}/bs4/tests/test_tree.py
+${PYSITELIB}/bs4/tests/test_tree.pyc
+${PYSITELIB}/bs4/tests/test_tree.pyo
diff --git a/www/py-beautifulsoup4/distinfo b/www/py-beautifulsoup4/distinfo
new file mode 100644
index 00000000000..30538e34f06
--- /dev/null
+++ b/www/py-beautifulsoup4/distinfo
@@ -0,0 +1,5 @@
+$NetBSD: distinfo,v 1.1.1.1 2012/06/03 21:29:57 wiz Exp $
+
+SHA1 (beautifulsoup4-4.1.0.tar.gz) = b49de1ab63065ac2c57956d2f99efdf9811e8427
+RMD160 (beautifulsoup4-4.1.0.tar.gz) = 2764a4baba86c0e7c090dd48d3cf206d0c812738
+Size (beautifulsoup4-4.1.0.tar.gz) = 128946 bytes