
MetadataParser

Build Status: ![Python package](https://github.com/jvanasco/metadata_parser/workflows/Python%20package/badge.svg)

MetadataParser is a Python module for pulling metadata out of web documents.

It requires BeautifulSoup, and was largely based on Erik River’s opengraph module (https://github.com/erikriver/opengraph).

I needed something more aggressive than Erik’s module, so I had to fork.

Installation Recommendation

I strongly suggest you use the requests library, version 2.4.3 or newer.

This is not required, but it is better. On earlier versions it is possible to have an uncaught DecodeError exception when there is an underlying redirect/404. Recent fixes to requests improve redirect handling, urllib3 usage, and urllib3 error handling.
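A minimal sketch of the failure mode this guards against (illustrative; with a modern requests, errors surface as requests exceptions rather than raw urllib3 ones):

```python
import requests
from requests.exceptions import RequestException

try:
    # on requests < 2.4.3, a redirect into a 404 could surface an
    # unwrapped DecodeError from urllib3 instead of a requests exception
    response = requests.get("http://www.example.com", allow_redirects=True)
    response.raise_for_status()
except RequestException as exc:
    print("fetch failed: %s" % exc)
```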

Features

  • it pulls as much metadata out of a document as possible
  • you can set a ‘strategy’ for finding metadata (i.e. only accept opengraph or page attributes)
  • lightweight but functional(!) url validation
  • logging is verbose, but nested under __debug__ statements, so it is compiled away when PYTHONOPTIMIZE is set (see the sketch below)
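A minimal sketch of that __debug__ pattern (illustrative, not the package’s actual log statements):

```python
import logging

log = logging.getLogger("metadata_parser")

if __debug__:
    # `if __debug__:` blocks are stripped entirely when Python runs
    # with -O (i.e. when PYTHONOPTIMIZE is set), so this logging costs
    # nothing in optimized deployments
    log.debug("parsed %d metadata fields", 10)
```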

Notes

  1. This requires BeautifulSoup 4.
  2. For speed, it will instantiate a BeautifulSoup parser with lxml, and fall back to ‘none’ (BeautifulSoup’s internal pure-Python parser) if it can’t load lxml
  3. URL Validation is not RFC compliant, but tries to be “Real World” compliant
  • It is HIGHLY recommended that you install lxml. It is considerably faster. *

You should also use a very recent version of lxml. I’ve had problems with segfaults on some versions < 2.3.x; I would suggest using the most recent 3.x if possible.

The default ‘strategy’ is to look in this order:

    og,dc,meta,page

Which stands for the following:

    og   = OpenGraph
    dc   = DublinCore
    meta = metadata (<meta> name/property tags)
    page = page elements

You can specify a strategy as a comma-separated list of the above.

The only 2 page elements currently supported are:

    <title>TITLE</title>
    <link rel="canonical" href="CANONICAL">

‘metadata’ elements are supported by both name and property.
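For example (a sketch; get_metadatas also accepts a per-call strategy):

```python
import metadata_parser

page = metadata_parser.MetadataParser(url="http://www.example.com")

# consult only OpenGraph, then DublinCore, when looking up "title"
print(page.get_metadatas("title", strategy=["og", "dc"]))
```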

The MetadataParser object also wraps some convenience functions (which can also be used on their own) that are designed to turn alleged urls into well-formed urls.

For example, you may pull a page:

    http://www.example.com/path/to/page.html

and that file indicates a canonical url which is simply “/file.html”.

This package will try to ‘remount’ the canonical url to the absolute url of “http://www.example.com/file.html”. It will return None if the end result is not a valid url.

This all happens under-the-hood, and is honestly really useful when dealing with indexers and spiders.
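A sketch of this behavior, using the get_discrete_url() convenience method to compute the effective url of a fetched document:

```python
import metadata_parser

page = metadata_parser.MetadataParser(
    url="http://www.example.com/path/to/page.html"
)

# if the document declares <link rel="canonical" href="/file.html">,
# the relative href is remounted onto the fetched domain
print(page.get_discrete_url())  # e.g. http://www.example.com/file.html
```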

URL Validation

“Real World” URL validation is enabled by default. This is not RFC compliant.

There are a few gaps in the RFCs that allow for “odd behavior”. Just about any use-case for this package will desire/expect rules that parse URLs “in the wild”, not theoretical ones.

The differences:

  • If an entirely numeric ip address is encountered, it is assumed to be a dot-notation IPV4, and it is checked to have the right number of valid octets.

    The default behavior is to invalidate hosts such as:

        http://256.256.256.256
        http://999.999.999.999

    According to RFCs those are valid hostnames that would fail as “IP Addresses” but pass as “Domain Names”. In the real world, however, one would never encounter domain names like those.

  • The only non-domain hostname that is allowed is “localhost”.

    The default behavior is to invalidate hosts such as:

        http://example
        http://intranet

    Those are considered to be valid hosts, and might exist on a local network or in a custom hosts file. However, they are not part of the public internet.

Although this behavior breaks RFCs, it greatly reduces the number of “False Positives” generated when analyzing internet pages. If you want to include bad data, you can submit a kwarg to MetadataParser.__init__:
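For example (a sketch; these kwargs are covered under “Poorly Constructed Canonical URLs” below):

```python
import metadata_parser

# accept single-label intranet hosts and local addresses that the
# default "real world" validation would reject
page = metadata_parser.MetadataParser(
    url="http://intranet/page.html",  # hypothetical internal host
    require_public_netloc=False,
    allow_localhosts=True,
)
```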

Handling Bad URLs and Encoded URIs

This library tries to safeguard against a few common situations.

Encoded URIs and Relative URLs

Most website publishers will define an image as a URL:

    <meta property="og:image" content="http://example.com/image.jpg"/>

Some will define an image as an encoded URI:

    <meta property="og:image" content="data:image/png;base64,..."/>

By default, the get_metadata_link() method can be used to ensure a valid link is extracted from the metadata payload, as shown in the sketch below.

This method accepts a kwarg allow_encoded_uri (default False) which will return the image without further processing:
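A minimal sketch, assuming a page that declares the og:image variants above:

```python
import metadata_parser

page = metadata_parser.MetadataParser(url="http://example.com")

# returns a validated absolute url, or None if nothing valid is found
print(page.get_metadata_link("image"))

# returns an encoded uri (e.g. "data:image/png;base64,...") untouched
print(page.get_metadata_link("image", allow_encoded_uri=True))
```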

Similarly, if a url is local:

    <meta property="og:image" content="/image.jpg"/>

The get_metadata_link method will automatically upgrade it onto the domain of the fetched page, e.g. “/image.jpg” on http://example.com becomes “http://example.com/image.jpg”.

Poorly Constructed Canonical URLs

Many website publishers implement canonical URLs incorrectly. This package tries to fix that.

By default MetadataParser is constructed with require_public_netloc=True and allow_localhosts=True.


This will require somewhat valid ‘public’ network locations in the url.

For example, these will all be treated as valid URLs:

    http://example.com
    http://localhost
    http://192.168.1.1
    http://10.1.1.1

If these known ‘localhost’ urls are not wanted, they can be filtered out with allow_localhosts=False:
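For example (a sketch):

```python
import metadata_parser

# reject "localhost"-style hosts when validating urls found in the document
page = metadata_parser.MetadataParser(
    url="http://example.com",
    allow_localhosts=False,
)
```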

There are two convenience methods that can be used to get a canonical url or calculate the effective url.

These both accept an argument require_public_global, which defaults to True.

Assuming we have the following content on the url http://example.com/path/to/foo:

    <link rel="canonical" href="http://localhost:8000/alt-path/to/foo">

By default, versions 0.9.0 and later will detect ‘localhost:8000’ as an improper canonical url and remount the local part “/alt-path/to/foo” onto the domain that served the file. The vast majority of times this ‘behavior’ has been encountered, this is the intended canonical:

    http://example.com/alt-path/to/foo

In contrast, versions 0.8.3 and earlier will not catch this situation, and will return the declared canonical as-is:

    http://localhost:8000/alt-path/to/foo

In order to preserve the earlier behavior, just submit require_public_global=False:
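A sketch of both behaviors, assuming get_discrete_url() is one of the convenience methods above and accepts the require_public_global kwarg:

```python
import metadata_parser

page = metadata_parser.MetadataParser(url="http://example.com/path/to/foo")

# 0.9.0+ default: the improper localhost canonical is remounted
print(page.get_discrete_url())
# -> http://example.com/alt-path/to/foo

# pre-0.9.0 behavior: trust the declared canonical as-is
print(page.get_discrete_url(require_public_global=False))
# -> http://localhost:8000/alt-path/to/foo
```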

Handling Bad Data

Many CMS systems (and developers) create malformed content or incorrect document identifiers. When this happens, the BeautifulSoup parser will lose data or move it into an unexpected place.

There are two arguments that can help you analyze this data:

  • force_doctype:

force_doctype=True will try to replace the identified doctype with “html” via regex. This will often make the input data usable by BS4.

  • search_head_only:

search_head_only=False will not limit the search path to the “<head>” element. This has a slight performance cost and will incorporate data from CMS/user content, not just templates/site operators. (See the sketch below.)
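For example (a sketch combining both arguments):

```python
import metadata_parser

page = metadata_parser.MetadataParser(
    url="http://example.com",
    force_doctype=True,       # rewrite a broken doctype to "html" for BS4
    search_head_only=False,   # scan the whole document, not just <head>
)
```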

WARNING

1.0 will be a complete API overhaul. Pin your releases to avoid sadness.

Version 0.9.19 Breaking Changes

Issue #12 exposed some flaws in the existing package:

1. MetadataParser.get_metadatas replaces MetadataParser.get_metadata

Until version 0.9.19, the recommended way to get metadata was to use get_metadata, which returns either a string or None.

Starting with version 0.9.19, the recommended way to get metadata is to use get_metadatas, which always returns a list (or None).

This change was made because the library previously stored only a single value for a metadata key when duplicates were encountered.

2. The ParsedResult payload stores mixed content and tracks its version

Many users (including the maintainer) archive the parsed metadata. After testing a variety of payloads with an all-list format and a mixed format (string or list), a mixed format had a much smaller payload size with a negligible performance hit. A new _v attribute tracks the payload version. In the future, payloads without a _v attribute will be interpreted as the pre-versioning format.
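As an illustration, an archived payload might look like this (the layout is hypothetical; only the mixed string-or-list values and the _v marker follow from the above):

```python
archived_payload = {
    "og": {
        "title": "Example Page",        # a single value is stored as a string
        "image": ["/a.jpg", "/b.jpg"],  # duplicate values are stored as a list
    },
    "_v": 2,  # hypothetical payload-version marker; absent in pre-versioning payloads
}
```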

3. DublinCore payloads might be a dict

Tests were added to handle DublinCore data. An extra attribute may be needed to properly represent the payload, so always returning a dict with at least name+content (and possibly lang or scheme) is the best approach.
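For example, a DublinCore entry might then be represented as (values illustrative):

```python
entry = {"name": "date", "content": "2019-01-01", "scheme": "DCTERMS.W3CDTF"}
```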

Usage

Until version 0.9.19, the recommended way to get metadata was to use get_metadata, which returns a string (or None); as of 0.9.19, use get_metadatas, which returns a list (or None).

From a URL:
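A minimal sketch (the printed values depend on the fetched page):

```python
import metadata_parser

page = metadata_parser.MetadataParser(url="http://www.example.com")
print(page.get_metadatas("title"))         # a list of values, or None
print(page.get_metadatas("image"))
print(page.get_metadatas("doesnotexist"))  # None
```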

From HTML:
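And a sketch for parsing an HTML string directly:

```python
import metadata_parser

html = """<html><head>
  <title>Example</title>
  <meta property="og:title" content="Example"/>
</head><body></body></html>"""

page = metadata_parser.MetadataParser(html=html)
print(page.get_metadatas("title"))  # e.g. ['Example']
```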

Malformed Data

It is very common to find malformed data. As of version 0.9.20, the parser can be asked to tolerate malformed presentation.

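A minimal sketch, assuming the support_malformed kwarg introduced in 0.9.20:

```python
import metadata_parser

# tolerate common malformed markup, e.g. twitter tags declared with
# the wrong attribute
page = metadata_parser.MetadataParser(
    url="http://www.example.com",
    support_malformed=True,
)
```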

This option supports parsing common malformed patterns. Currently it only looks at alternate (improper) ways of producing twitter tags, but may be expanded.

Notes

When building on Python 3, a static toplevel directory may be needed.

1234567890