Conflicts:
	.gitignore
	LATEST_VERSION
	Makefile
	youtube-dl
	youtube-dl.exe
	youtube_dl/InfoExtractors.py
	youtube_dl/__init__.py
Jeff Crouse 2013-01-05 15:03:54 -05:00
commit 258d5850c9
43 changed files with 6464 additions and 4994 deletions

.gitignore

@@ -1,17 +1,19 @@
 *.pyc
 *.pyo
 *~
+*.DS_Store
 wine-py2exe/
 py2exe.log
-youtube-dl
+*.kate-swp
+build/
+dist/
+MANIFEST
+README.txt
 youtube-dl.1
-LATEST_VERSION
-
-#OS X
-.DS_Store
-.AppleDouble
-.LSOverride
-Icon
-._*
-.Spotlight-V100
-.Trashes
+youtube-dl.bash-completion
+youtube-dl
+youtube-dl.exe
+youtube-dl.tar.gz
+.coverage
+cover/
+updates_key.pem

.tarignore Normal file

@@ -0,0 +1,17 @@
updates_key.pem
*.pyc
*.pyo
youtube-dl.exe
wine-py2exe/
py2exe.log
*.kate-swp
build/
dist/
MANIFEST
*.DS_Store
youtube-dl.tar.gz
.coverage
cover/
__pycache__/
.git/
*~

.travis.yml

@@ -1,9 +1,14 @@
 language: python
-#specify the python version
 python:
   - "2.6"
   - "2.7"
-#command to install the setup
-install:
-# command to run tests
-script: nosetests test --nocapture
+  - "3.3"
+script: nosetests test --verbose
+notifications:
+  email:
+    - filippo.valsorda@gmail.com
+    - phihag@phihag.de
+  irc:
+    channels:
+      - "irc.freenode.org#youtube-dl"
+    skip_join: true

CHANGELOG Normal file

@@ -0,0 +1,14 @@
2013.01.02 Codename: GIULIA
* Add support for ComedyCentral clips <nto>
* Corrected Vimeo description fetching <Nick Daniels>
* Added the --no-post-overwrites argument <Barbu Paul - Gheorghe>
* --verbose offers more environment info
* New info_dict field: uploader_id
* New updates system, with signature checking
* New IEs: NBA, JustinTV, FunnyOrDie, TweetReel, Steam, Ustream
* Fixed IEs: BlipTv
* Fixed for Python 3 IEs: Xvideo, Youku, XNXX, Dailymotion, Vimeo, InfoQ
* Simplified IEs and test code
* Various (Python 3 and other) fixes
* Revamped and expanded tests

LATEST_VERSION Normal file

@@ -0,0 +1 @@
2012.10.09

LICENSE Normal file

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <http://unlicense.org/>

MANIFEST.in Normal file

@@ -0,0 +1,3 @@
include README.md
include test/*.py
include test/*.json

Makefile

@@ -1,8 +1,7 @@
-all: youtube-dl README.md youtube-dl.1 youtube-dl.bash-completion LATEST_VERSION
+all: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion
+# TODO: re-add youtube-dl.exe, and make sure it's 1. safe and 2. doesn't need sudo
 
 clean:
-	rm -f youtube-dl youtube-dl.exe youtube-dl.1 LATEST_VERSION youtube_dl/*.pyc
+	rm -rf youtube-dl youtube-dl.exe youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/
 
 PREFIX=/usr/local
 BINDIR=$(PREFIX)/bin
@@ -17,43 +16,32 @@ install: youtube-dl youtube-dl.1 youtube-dl.bash-completion
 	install -d $(DESTDIR)$(SYSCONFDIR)/bash_completion.d
 	install -m 644 youtube-dl.bash-completion $(DESTDIR)$(SYSCONFDIR)/bash_completion.d/youtube-dl
 
-.PHONY: all clean install youtube-dl.bash-completion
-# TODO un-phony README.md and youtube-dl.bash_completion by reading from .in files and generating from them
+test:
+	#nosetests --with-coverage --cover-package=youtube_dl --cover-html --verbose --processes 4 test
+	nosetests --verbose test
+
+.PHONY: all clean install test
 
 youtube-dl: youtube_dl/*.py
-	zip --quiet --junk-paths youtube-dl youtube_dl/*.py
+	zip --quiet youtube-dl youtube_dl/*.py
+	zip --quiet --junk-paths youtube-dl youtube_dl/__main__.py
 	echo '#!/usr/bin/env python' > youtube-dl
 	cat youtube-dl.zip >> youtube-dl
 	rm youtube-dl.zip
 	chmod a+x youtube-dl
 
-youtube-dl.exe: youtube_dl/*.py
-	bash devscripts/wine-py2exe.sh build_exe.py
-
 README.md: youtube_dl/*.py
-	@options=$$(COLUMNS=80 python -m youtube_dl --help | sed -e '1,/.*General Options.*/ d' -e 's/^\W\{2\}\(\w\)/## \1/') && \
-		header=$$(sed -e '/.*# OPTIONS/,$$ d' README.md) && \
-		footer=$$(sed -e '1,/.*# FAQ/ d' README.md) && \
-		echo "$${header}" > README.md && \
-		echo >> README.md && \
-		echo '# OPTIONS' >> README.md && \
-		echo "$${options}" >> README.md&& \
-		echo >> README.md && \
-		echo '# FAQ' >> README.md && \
-		echo "$${footer}" >> README.md
+	COLUMNS=80 python -m youtube_dl --help | python devscripts/make_readme.py
 
-youtube-dl.1:
-	pandoc -s -w man README.md -o youtube-dl.1
+README.txt: README.md
+	pandoc -f markdown -t plain README.md -o README.txt
 
-youtube-dl.bash-completion:
-	@options=`egrep -o '(--[a-z-]+) ' README.md | sort -u | xargs echo` && \
-		content=`sed "s/opts=\"[^\"]*\"/opts=\"$${options}\"/g" youtube-dl.bash-completion` && \
-		echo "$${content}" > youtube-dl.bash-completion
+youtube-dl.1: README.md
+	pandoc -s -f markdown -t man README.md -o youtube-dl.1
 
-LATEST_VERSION: youtube_dl/__init__.py
-	python -m youtube_dl --version > LATEST_VERSION
+youtube-dl.bash-completion: youtube_dl/*.py devscripts/bash-completion.in
+	python devscripts/bash-completion.py
 
-test:
-	nosetests2 --nocapture test
-
-.PHONY: default compile update update-latest update-readme test clean
+youtube-dl.tar.gz: all
+	tar -cvzf youtube-dl.tar.gz -s "|^./|./youtube-dl/|" \
+		--exclude-from=".tarignore" -- .

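The youtube-dl target above builds a self-executing zip: a `#!/usr/bin/env python` line followed by a zip archive whose root contains `__main__.py`, which the interpreter knows how to run. A minimal sketch of the same trick in Python, for illustration only (the helper name `build_packed_executable` is hypothetical, not part of the commit):

    import os
    import stat
    import zipfile

    def build_packed_executable(out='youtube-dl', pkg='youtube_dl'):
        # 1. Zip the package; __main__.py must end up at the archive root
        #    (which is what the --junk-paths zip invocation above achieves).
        with zipfile.ZipFile(out + '.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
            for fn in os.listdir(pkg):
                if fn.endswith('.py'):
                    arcname = '__main__.py' if fn == '__main__.py' else os.path.join(pkg, fn)
                    zf.write(os.path.join(pkg, fn), arcname)
        # 2. Prefix the archive with a shebang; zip files are parsed from the
        #    end (central directory), so leading extra bytes are harmless.
        with open(out, 'wb') as outf:
            outf.write(b'#!/usr/bin/env python\n')
            with open(out + '.zip', 'rb') as zin:
                outf.write(zin.read())
        os.remove(out + '.zip')
        # 3. Make the result executable, like `chmod a+x youtube-dl` above.
        os.chmod(out, os.stat(out).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)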
README.md

@@ -1,4 +1,4 @@
-% youtube-dl(1)
+% YOUTUBE-DL(1)
 
 # NAME
 youtube-dl
@@ -20,6 +20,11 @@ which means you can modify it, redistribute it or use it however you like.
     -i, --ignore-errors        continue on download errors
     -r, --rate-limit LIMIT     download rate limit (e.g. 50k or 44.6m)
     -R, --retries RETRIES      number of retries (default is 10)
+    --buffer-size SIZE         size of download buffer (e.g. 1024 or 16k) (default
+                               is 1024)
+    --no-resize-buffer         do not automatically adjust the buffer size. By
+                               default, the buffer size is automatically resized
+                               from an initial value of SIZE.
     --dump-user-agent          display the current browser identification
     --user-agent UA            specify a custom user agent
     --list-extractors          List all supported extractors and the URLs they
@@ -37,16 +42,22 @@ which means you can modify it, redistribute it or use it however you like.
 Filesystem Options:
     -t, --title                use title in file name
     --id                       use video ID in file name
-    -l, --literal              use literal title in file name
+    -l, --literal              [deprecated] alias of --title
     -A, --auto-number          number downloaded files starting from 00000
-    -o, --output TEMPLATE      output filename template. Use %(stitle)s to get the
-                               title, %(uploader)s for the uploader name,
-                               %(autonumber)s to get an automatically incremented
-                               number, %(ext)s for the filename extension,
-                               %(upload_date)s for the upload date (YYYYMMDD),
-                               %(extractor)s for the provider (youtube, metacafe,
-                               etc), %(id)s for the video id and %% for a literal
-                               percent. Use - to output to stdout.
+    -o, --output TEMPLATE      output filename template. Use %(title)s to get the
+                               title, %(uploader)s for the uploader name,
+                               %(uploader_id)s for the uploader nickname if
+                               different, %(autonumber)s to get an automatically
+                               incremented number, %(ext)s for the filename
+                               extension, %(upload_date)s for the upload date
+                               (YYYYMMDD), %(extractor)s for the provider
+                               (youtube, metacafe, etc), %(id)s for the video id
+                               and %% for a literal percent. Use - to output to
+                               stdout. Can also be used to download to a different
+                               directory, for example with -o '/my/downloads/%(upl
+                               oader)s/%(title)s-%(id)s.%(ext)s' .
+    --restrict-filenames       Restrict filenames to only ASCII characters, and
+                               avoid "&" and spaces in filenames
     -a, --batch-file FILE      file containing URLs to download ('-' for stdin)
     -w, --no-overwrites        do not overwrite files
     -c, --continue             resume partially downloaded files
@@ -101,6 +112,34 @@ which means you can modify it, redistribute it or use it however you like.
                                specific bitrate like 128K (default 5)
     -k, --keep-video           keeps the video file on disk after the post-
                                processing; the video is erased by default
+    --no-post-overwrites       do not overwrite post-processed files; the post-
+                               processed files are overwritten by default
+
+# CONFIGURATION
+
+You can configure youtube-dl by placing default arguments (such as `--extract-audio --no-mtime` to always extract the audio and not copy the mtime) into `/etc/youtube-dl.conf` and/or `~/.local/config/youtube-dl.conf`.
+
+# OUTPUT TEMPLATE
+
+The `-o` option allows users to indicate a template for the output file names. The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "http://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences have the format `%(NAME)s`. To clarify, that is a percent symbol followed by a name in parentheses, followed by a lowercase S. Allowed names are:
+
+ - `id`: The sequence will be replaced by the video identifier.
+ - `url`: The sequence will be replaced by the video URL.
+ - `uploader`: The sequence will be replaced by the nickname of the person who uploaded the video.
+ - `upload_date`: The sequence will be replaced by the upload date in YYYYMMDD format.
+ - `title`: The sequence will be replaced by the video title.
+ - `ext`: The sequence will be replaced by the appropriate extension (like flv or mp4).
+ - `epoch`: The sequence will be replaced by the Unix epoch when creating the file.
+ - `autonumber`: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
+
+The current default template is `%(id)s.%(ext)s`, but that will be switched to `%(title)s-%(id)s.%(ext)s` (which can be requested with `-t` at the moment).
+
+In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
+
+    $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc
+    youtube-dl test video ''_ä↭𝕐.mp4    # All kinds of weird characters
+    $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc --restrict-filenames
+    youtube-dl_test_video_.mp4          # A simple file name
+
 # FAQ
@@ -137,17 +176,9 @@ The error
 
 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
 
-To run youtube-dl under Python 2.5, you'll have to manually check it out like this:
-
-    git clone git://github.com/rg3/youtube-dl.git
-    cd youtube-dl
-    python -m youtube_dl --help
-
-Please note that Python 2.5 is not supported anymore.
-
 ### What is this binary file? Where has the code gone?
 
-Since June 2012 (#342) youtube-dl is packed as an executable zipfile, simply unzip it (might need renaming to `youtube-dl.zip` first on some systems) or clone the git repo to see the code. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make compile`.
+Since June 2012 (#342) youtube-dl is packed as an executable zipfile, simply unzip it (might need renaming to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
 
 ### The exe throws a *Runtime error from Visual C++*
@@ -166,6 +197,9 @@ Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/i
 Please include:
 
 * Your exact command line, like `youtube-dl -t "http://www.youtube.com/watch?v=uHlDtZ6Oc3s&feature=channel_video_title"`. A common mistake is not to escape the `&`. Putting URLs in quotes should solve this problem.
+* If possible re-run the command with `--verbose`, and include the full output, it is really helpful to us.
 * The output of `youtube-dl --version`
 * The output of `python --version`
 * The name and version of your Operating System ("Ubuntu 11.04 x64" or "Windows 7 x64" is usually enough).
+
+For discussions, join us in the irc channel #youtube-dl on freenode.

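For reference, the OUTPUT TEMPLATE section added above describes plain Python %-style string interpolation against each video's info dictionary; a sketch of the expansion, with invented field values:

    # Output templates are ordinary %-formatting over the info dict
    # (the values below are made up for illustration).
    info = {
        'id': 'BaW_jenozKc',
        'title': 'youtube-dl test video',
        'uploader': 'Philipp Hagemeister',
        'ext': 'mp4',
    }
    outtmpl = '/my/downloads/%(uploader)s/%(title)s-%(id)s.%(ext)s'
    print(outtmpl % info)
    # -> /my/downloads/Philipp Hagemeister/youtube-dl test video-BaW_jenozKc.mp4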
bin/youtube-dl Executable file

@@ -0,0 +1,6 @@
#!/usr/bin/env python

import youtube_dl

if __name__ == '__main__':
    youtube_dl.main()

build_exe.py

@@ -1,48 +0,0 @@
from distutils.core import setup
import py2exe
import sys, os

"""This will create an exe that needs Microsoft Visual C++ 2008 Redistributable Package"""

# If run without args, build executables
if len(sys.argv) == 1:
    sys.argv.append("py2exe")

# os.chdir(os.path.dirname(os.path.abspath(sys.argv[0]))) # conflict with wine-py2exe.sh
sys.path.append('./youtube_dl')

options = {
    "bundle_files": 1,
    "compressed": 1,
    "optimize": 2,
    "dist_dir": '.',
    "dll_excludes": ['w9xpopen.exe']
}

console = [{
    "script": "./youtube_dl/__main__.py",
    "dest_base": "youtube-dl",
}]

init_file = open('./youtube_dl/__init__.py')
for line in init_file.readlines():
    if line.startswith('__version__'):
        version = line[11:].strip(" ='\n")
        break
else:
    version = ''

setup(name='youtube-dl',
      version=version,
      description='Small command-line program to download videos from YouTube.com and other video sites',
      url='https://github.com/rg3/youtube-dl',
      packages=['youtube_dl'],
      console=console,
      options={"py2exe": options},
      zipfile=None,
)

import shutil
shutil.rmtree("build")

devscripts/bash-completion.in

@@ -0,0 +1,14 @@
__youtube-dl()
{
    local cur prev opts
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    opts="{{flags}}"

    if [[ ${cur} == * ]] ; then
        COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
        return 0
    fi
}

complete -F __youtube-dl youtube-dl

devscripts/bash-completion.py Executable file

@@ -0,0 +1,26 @@
#!/usr/bin/env python
import os
from os.path import dirname as dirn
import sys

sys.path.append(dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl

BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"

def build_completion(opt_parser):
    opts_flag = []
    for group in opt_parser.option_groups:
        for option in group.option_list:
            #for every long flag
            opts_flag.append(option.get_opt_string())
    with open(BASH_COMPLETION_TEMPLATE) as f:
        template = f.read()
    with open(BASH_COMPLETION_FILE, "w") as f:
        #just using the special char
        filled_template = template.replace("{{flags}}", " ".join(opts_flag))
        f.write(filled_template)

parser = youtube_dl.parseOpts()[0]
build_completion(parser)

devscripts/gh-pages/add-version.py

@@ -0,0 +1,33 @@
#!/usr/bin/env python3

import json
import sys
import hashlib
import urllib.request

if len(sys.argv) <= 1:
    print('Specify the version number as parameter')
    sys.exit()
version = sys.argv[1]

with open('update/LATEST_VERSION', 'w') as f:
    f.write(version)

versions_info = json.load(open('update/versions.json'))
if 'signature' in versions_info:
    del versions_info['signature']

new_version = {}

filenames = {'bin': 'youtube-dl', 'exe': 'youtube-dl.exe', 'tar': 'youtube-dl-%s.tar.gz' % version}
for key, filename in filenames.items():
    print('Downloading and checksumming %s...' % filename)
    url = 'http://youtube-dl.org/downloads/%s/%s' % (version, filename)
    data = urllib.request.urlopen(url).read()
    sha256sum = hashlib.sha256(data).hexdigest()
    new_version[key] = (url, sha256sum)

versions_info['versions'][version] = new_version
versions_info['latest'] = version

json.dump(versions_info, open('update/versions.json', 'w'), indent=4, sort_keys=True)

devscripts/gh-pages/generate-download.py

@@ -0,0 +1,32 @@
#!/usr/bin/env python3
import hashlib
import shutil
import subprocess
import tempfile
import urllib.request
import json

versions_info = json.load(open('update/versions.json'))
version = versions_info['latest']
URL = versions_info['versions'][version]['bin'][0]

data = urllib.request.urlopen(URL).read()

# Read template page
with open('download.html.in', 'r', encoding='utf-8') as tmplf:
    template = tmplf.read()

md5sum = hashlib.md5(data).hexdigest()
sha1sum = hashlib.sha1(data).hexdigest()
sha256sum = hashlib.sha256(data).hexdigest()
template = template.replace('@PROGRAM_VERSION@', version)
template = template.replace('@PROGRAM_URL@', URL)
template = template.replace('@PROGRAM_MD5SUM@', md5sum)
template = template.replace('@PROGRAM_SHA1SUM@', sha1sum)
template = template.replace('@PROGRAM_SHA256SUM@', sha256sum)
template = template.replace('@EXE_URL@', versions_info['versions'][version]['exe'][0])
template = template.replace('@EXE_SHA256SUM@', versions_info['versions'][version]['exe'][1])
template = template.replace('@TAR_URL@', versions_info['versions'][version]['tar'][0])
template = template.replace('@TAR_SHA256SUM@', versions_info['versions'][version]['tar'][1])
with open('download.html', 'w', encoding='utf-8') as dlf:
    dlf.write(template)

devscripts/gh-pages/sign-versions.py

@@ -0,0 +1,28 @@
#!/usr/bin/env python3

import rsa
import json
from binascii import hexlify

versions_info = json.load(open('update/versions.json'))
if 'signature' in versions_info:
    del versions_info['signature']

print('Enter the PKCS1 private key, followed by a blank line:')
privkey = ''
while True:
    try:
        line = input()
    except EOFError:
        break
    if line == '':
        break
    privkey += line + '\n'
privkey = bytes(privkey, 'ascii')
privkey = rsa.PrivateKey.load_pkcs1(privkey)

signature = hexlify(rsa.pkcs1.sign(json.dumps(versions_info, sort_keys=True).encode('utf-8'), privkey, 'SHA-256')).decode()
print('signature: ' + signature)

versions_info['signature'] = signature
json.dump(versions_info, open('update/versions.json', 'w'), indent=4, sort_keys=True)

devscripts/gh-pages/update-copyright.py

@@ -0,0 +1,21 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import with_statement

import datetime
import glob
import io  # For Python 2 compatibility
import os
import re

year = str(datetime.datetime.now().year)
for fn in glob.glob('*.html*'):
    with io.open(fn, encoding='utf-8') as f:
        content = f.read()
    newc = re.sub(u'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', u'Copyright © 2006-' + year, content)
    if content != newc:
        tmpFn = fn + '.part'
        with io.open(tmpFn, 'wt', encoding='utf-8') as outf:
            outf.write(newc)
        os.rename(tmpFn, fn)

devscripts/make_readme.py Executable file

@@ -0,0 +1,20 @@
import sys
import re

README_FILE = 'README.md'
helptext = sys.stdin.read()

with open(README_FILE) as f:
    oldreadme = f.read()

header = oldreadme[:oldreadme.index('# OPTIONS')]
footer = oldreadme[oldreadme.index('# CONFIGURATION'):]

options = helptext[helptext.index('  General Options:')+19:]
options = re.sub(r'^  (\w.+)$', r'## \1', options, flags=re.M)
options = '# OPTIONS\n' + options + '\n'

with open(README_FILE, 'w') as f:
    f.write(header)
    f.write(options)
    f.write(footer)

devscripts/release.sh

@@ -1,11 +1,85 @@
 #!/bin/sh
 
+# IMPORTANT: the following assumptions are made
+# * the GH repo is on the origin remote
+# * the gh-pages branch is named so locally
+# * the git config user.signingkey is properly set
+
+# You will need
+# pip install coverage nose rsa
+
+# TODO
+# release notes
+# make hash on local files
+
+set -e
+
 if [ -z "$1" ]; then echo "ERROR: specify version number like this: $0 1994.09.06"; exit 1; fi
 version="$1"
 if [ ! -z "`git tag | grep "$version"`" ]; then echo 'ERROR: version already present'; exit 1; fi
-if [ ! -z "`git status --porcelain`" ]; then echo 'ERROR: the working directory is not clean; commit or stash changes'; exit 1; fi
-sed -i "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/__init__.py
-make all
-git add -A
+if [ ! -z "`git status --porcelain | grep -v CHANGELOG`" ]; then echo 'ERROR: the working directory is not clean; commit or stash changes'; exit 1; fi
+if [ ! -f "updates_key.pem" ]; then echo 'ERROR: updates_key.pem missing'; exit 1; fi
+
+echo "\n### First of all, testing..."
+make clean
+nosetests --with-coverage --cover-package=youtube_dl --cover-html test || exit 1
+
+echo "\n### Changing version in version.py..."
+sed -i~ "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/version.py
+
+echo "\n### Committing CHANGELOG README.md and youtube_dl/version.py..."
+make README.md
+git add CHANGELOG README.md youtube_dl/version.py
 git commit -m "release $version"
-git tag -m "Release $version" "$version"
+
+echo "\n### Now tagging, signing and pushing..."
+git tag -s -m "Release $version" "$version"
+git show "$version"
+read -p "Is it good, can I push? (y/n) " -n 1
+if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi
+echo
+MASTER=$(git rev-parse --abbrev-ref HEAD)
+git push origin $MASTER:master
+git push origin "$version"
+
+echo "\n### OK, now it is time to build the binaries..."
+REV=$(git rev-parse HEAD)
+make youtube-dl youtube-dl.tar.gz
+wget "http://jeromelaheurte.net:8142/download/rg3/youtube-dl/youtube-dl.exe?rev=$REV" -O youtube-dl.exe || \
+	wget "http://jeromelaheurte.net:8142/build/rg3/youtube-dl/youtube-dl.exe?rev=$REV" -O youtube-dl.exe
+mkdir -p "update_staging/$version"
+mv youtube-dl youtube-dl.exe "update_staging/$version"
+mv youtube-dl.tar.gz "update_staging/$version/youtube-dl-$version.tar.gz"
+RELEASE_FILES="youtube-dl youtube-dl.exe youtube-dl-$version.tar.gz"
+(cd update_staging/$version/ && md5sum $RELEASE_FILES > MD5SUMS)
+(cd update_staging/$version/ && sha1sum $RELEASE_FILES > SHA1SUMS)
+(cd update_staging/$version/ && sha256sum $RELEASE_FILES > SHA2-256SUMS)
+(cd update_staging/$version/ && sha512sum $RELEASE_FILES > SHA2-512SUMS)
+git checkout HEAD -- youtube-dl youtube-dl.exe
+
+echo "\n### Signing and uploading the new binaries to youtube-dl.org..."
+for f in $RELEASE_FILES; do gpg --detach-sig "update_staging/$version/$f"; done
+scp -r "update_staging/$version" ytdl@youtube-dl.org:html/downloads/
+rm -r update_staging
+
+echo "\n### Now switching to gh-pages..."
+git checkout gh-pages
+git checkout "$MASTER" -- devscripts/gh-pages/
+git reset devscripts/gh-pages/
+devscripts/gh-pages/add-version.py $version
+devscripts/gh-pages/sign-versions.py < updates_key.pem
+devscripts/gh-pages/generate-download.py
+devscripts/gh-pages/update-copyright.py
+rm -r test_coverage
+mv cover test_coverage
+git add *.html *.html.in update test_coverage
+git commit -m "release $version"
+git show HEAD
+read -p "Is it good, can I push? (y/n) " -n 1
+if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi
+echo
+git push origin gh-pages
+
+echo "\n### DONE!"
+rm -r devscripts
+git checkout $MASTER

devscripts/transition_helper.py

@@ -0,0 +1,40 @@
#!/usr/bin/env python

import sys, os

try:
    import urllib.request as compat_urllib_request
except ImportError: # Python 2
    import urllib2 as compat_urllib_request

sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n')

try:
    raw_input()
except NameError: # Python 3
    input()

filename = sys.argv[0]

API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads"
BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl"

if not os.access(filename, os.W_OK):
    sys.exit('ERROR: no write permissions on %s' % filename)

try:
    urlh = compat_urllib_request.urlopen(BIN_URL)
    newcontent = urlh.read()
    urlh.close()
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to download latest version')

try:
    with open(filename, 'wb') as outf:
        outf.write(newcontent)
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to overwrite current version')

sys.stderr.write(u'Done! Now you can run youtube-dl.\n')

devscripts/transition_helper_exe/setup.py

@@ -0,0 +1,12 @@
from distutils.core import setup
import py2exe

py2exe_options = {
    "bundle_files": 1,
    "compressed": 1,
    "optimize": 2,
    "dist_dir": '.',
    "dll_excludes": ['w9xpopen.exe']
}

setup(console=['youtube-dl.py'], options={ "py2exe": py2exe_options }, zipfile=None)

devscripts/transition_helper_exe/youtube-dl.py

@@ -0,0 +1,102 @@
#!/usr/bin/env python

import sys, os
import urllib2
import json, hashlib

def rsa_verify(message, signature, key):
    from struct import pack
    from hashlib import sha256
    from sys import version_info
    def b(x):
        if version_info[0] == 2: return x
        else: return x.encode('latin1')
    assert(type(message) == type(b('')))
    block_size = 0
    n = key[0]
    while n:
        block_size += 1
        n >>= 8
    signature = pow(int(signature, 16), key[1], key[0])
    raw_bytes = []
    while signature:
        raw_bytes.insert(0, pack("B", signature & 0xFF))
        signature >>= 8
    signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes)
    if signature[0:2] != b('\x00\x01'): return False
    signature = signature[2:]
    if not b('\x00') in signature: return False
    signature = signature[signature.index(b('\x00'))+1:]
    if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False
    signature = signature[19:]
    if signature != sha256(message).digest(): return False
    return True

sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n')
sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n')
sys.stderr.write(u'From now on, get the binaries from http://rg3.github.com/youtube-dl/download.html, not from the git repository.\n\n')
raw_input()

filename = sys.argv[0]

UPDATE_URL = "http://rg3.github.com/youtube-dl/update/"
VERSION_URL = UPDATE_URL + 'LATEST_VERSION'
JSON_URL = UPDATE_URL + 'versions.json'
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)

if not os.access(filename, os.W_OK):
    sys.exit('ERROR: no write permissions on %s' % filename)

exe = os.path.abspath(filename)
directory = os.path.dirname(exe)
if not os.access(directory, os.W_OK):
    sys.exit('ERROR: no write permissions on %s' % directory)

try:
    versions_info = urllib2.urlopen(JSON_URL).read().decode('utf-8')
    versions_info = json.loads(versions_info)
except:
    sys.exit(u'ERROR: can\'t obtain versions info. Please try again later.')

if not 'signature' in versions_info:
    sys.exit(u'ERROR: the versions file is not signed or corrupted. Aborting.')
signature = versions_info['signature']
del versions_info['signature']
if not rsa_verify(json.dumps(versions_info, sort_keys=True), signature, UPDATES_RSA_KEY):
    sys.exit(u'ERROR: the versions file signature is invalid. Aborting.')

version = versions_info['versions'][versions_info['latest']]

try:
    urlh = urllib2.urlopen(version['exe'][0])
    newcontent = urlh.read()
    urlh.close()
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to download latest version')

newcontent_hash = hashlib.sha256(newcontent).hexdigest()
if newcontent_hash != version['exe'][1]:
    sys.exit(u'ERROR: the downloaded file hash does not match. Aborting.')

try:
    with open(exe + '.new', 'wb') as outf:
        outf.write(newcontent)
except (IOError, OSError) as err:
    sys.exit(u'ERROR: unable to write the new version')

try:
    bat = os.path.join(directory, 'youtube-dl-updater.bat')
    b = open(bat, 'w')
    b.write("""
echo Updating youtube-dl...
ping 127.0.0.1 -n 5 -w 1000 > NUL
move /Y "%s.new" "%s"
del "%s"
\n""" % (exe, exe, bat))
    b.close()

    os.startfile(bat)
except (IOError, OSError) as err:
    sys.exit('ERROR: unable to overwrite current version')

sys.stderr.write(u'Done! Now you can run youtube-dl.\n')

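For contrast with the hand-rolled PKCS#1 v1.5 parsing in rsa_verify above, the same check can be expressed with the `rsa` module that devscripts/gh-pages/sign-versions.py already uses. A minimal sketch; the helper name `verify_versions_info` is hypothetical and not part of the commit:

    import json
    import rsa  # the same third-party module sign-versions.py uses

    def verify_versions_info(versions_info, n, e):
        # n, e are the public-key integers (UPDATES_RSA_KEY in the updater above).
        info = dict(versions_info)
        signature = bytes.fromhex(info.pop('signature'))
        message = json.dumps(info, sort_keys=True).encode('utf-8')
        try:
            rsa.verify(message, signature, rsa.PublicKey(n, e))  # raises on mismatch
            return True
        except rsa.VerificationError:
            return False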
setup.py Normal file

@@ -0,0 +1,74 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import print_function

from distutils.core import setup
import pkg_resources
import sys

try:
    import py2exe
    """This will create an exe that needs Microsoft Visual C++ 2008 Redistributable Package"""
except ImportError:
    if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
        print("Cannot import py2exe", file=sys.stderr)
        exit(1)

py2exe_options = {
    "bundle_files": 1,
    "compressed": 1,
    "optimize": 2,
    "dist_dir": '.',
    "dll_excludes": ['w9xpopen.exe']
}
py2exe_console = [{
    "script": "./youtube_dl/__main__.py",
    "dest_base": "youtube-dl",
}]
py2exe_params = {
    'console': py2exe_console,
    'options': { "py2exe": py2exe_options },
    'zipfile': None
}

if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
    params = py2exe_params
else:
    params = {
        'scripts': ['bin/youtube-dl'],
        'data_files': [('etc/bash_completion.d', ['youtube-dl.bash-completion']), # Installing system-wide would require sudo...
                       ('share/doc/youtube_dl', ['README.txt']),
                       ('share/man/man1/', ['youtube-dl.1'])]
    }

# Get the version from youtube_dl/version.py without importing the package
exec(compile(open('youtube_dl/version.py').read(), 'youtube_dl/version.py', 'exec'))

setup(
    name = 'youtube_dl',
    version = __version__,
    description = 'YouTube video downloader',
    long_description = 'Small command-line program to download videos from YouTube.com and other video sites.',
    url = 'https://github.com/rg3/youtube-dl',
    author = 'Ricardo Garcia',
    maintainer = 'Philipp Hagemeister',
    maintainer_email = 'phihag@phihag.de',
    packages = ['youtube_dl'],

    # Provokes warning on most systems (why?!)
    #test_suite = 'nose.collector',
    #test_requires = ['nosetest'],

    classifiers = [
        "Topic :: Multimedia :: Video",
        "Development Status :: 5 - Production/Stable",
        "Environment :: Console",
        "License :: Public Domain",
        "Programming Language :: Python :: 2.6",
        "Programming Language :: Python :: 2.7",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.3"
    ],

    **params
)

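The exec/compile line in setup.py above is the usual trick for reading `__version__` without importing the package (which might pull in unmet dependencies). A more explicit sketch of the same idea; the helper name `read_version` is hypothetical:

    def read_version(path='youtube_dl/version.py'):
        # Execute version.py in an isolated namespace and pull out __version__,
        # without importing the youtube_dl package itself.
        namespace = {}
        with open(path) as f:
            exec(compile(f.read(), path, 'exec'), namespace)
        return namespace['__version__']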
test/parameters.json

@@ -1 +1,40 @@
-{"username": null, "listformats": null, "skip_download": false, "usenetrc": false, "max_downloads": null, "noprogress": false, "forcethumbnail": false, "forceformat": false, "format_limit": null, "ratelimit": null, "nooverwrites": false, "forceurl": false, "writeinfojson": false, "simulate": false, "playliststart": 1, "continuedl": true, "password": null, "prefer_free_formats": false, "nopart": false, "retries": 10, "updatetime": true, "consoletitle": false, "verbose": true, "forcefilename": false, "ignoreerrors": false, "logtostderr": false, "format": null, "subtitleslang": null, "quiet": false, "outtmpl": "%(id)s.%(ext)s", "rejecttitle": null, "playlistend": -1, "writedescription": false, "forcetitle": false, "forcedescription": false, "writesubtitles": false, "matchtitle": null}
+{
+    "consoletitle": false,
+    "continuedl": true,
+    "forcedescription": false,
+    "forcefilename": false,
+    "forceformat": false,
+    "forcethumbnail": false,
+    "forcetitle": false,
+    "forceurl": false,
+    "format": null,
+    "format_limit": null,
+    "ignoreerrors": false,
+    "listformats": null,
+    "logtostderr": false,
+    "matchtitle": null,
+    "max_downloads": null,
+    "nooverwrites": false,
+    "nopart": false,
+    "noprogress": false,
+    "outtmpl": "%(id)s.%(ext)s",
+    "password": null,
+    "playlistend": -1,
+    "playliststart": 1,
+    "prefer_free_formats": false,
+    "quiet": false,
+    "ratelimit": null,
+    "rejecttitle": null,
+    "retries": 10,
+    "simulate": false,
+    "skip_download": false,
+    "subtitleslang": null,
+    "test": true,
+    "updatetime": true,
+    "usenetrc": false,
+    "username": null,
+    "verbose": true,
+    "writedescription": false,
+    "writeinfojson": true,
+    "writesubtitles": false
+}

test/test_all_urls.py Normal file

@@ -0,0 +1,27 @@
#!/usr/bin/env python

import sys
import unittest

# Allow direct execution
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.InfoExtractors import YoutubeIE, YoutubePlaylistIE

class TestAllURLsMatching(unittest.TestCase):
    def test_youtube_playlist_matching(self):
        self.assertTrue(YoutubePlaylistIE().suitable(u'ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8'))
        self.assertTrue(YoutubePlaylistIE().suitable(u'PL63F0C78739B09958'))
        self.assertFalse(YoutubePlaylistIE().suitable(u'PLtS2H6bU1M'))

    def test_youtube_matching(self):
        self.assertTrue(YoutubeIE().suitable(u'PLtS2H6bU1M'))

    def test_youtube_extract(self):
        self.assertEqual(YoutubeIE()._extract_id('http://www.youtube.com/watch?&v=BaW_jenozKc'), 'BaW_jenozKc')
        self.assertEqual(YoutubeIE()._extract_id('https://www.youtube.com/watch?&v=BaW_jenozKc'), 'BaW_jenozKc')
        self.assertEqual(YoutubeIE()._extract_id('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc'), 'BaW_jenozKc')

if __name__ == '__main__':
    unittest.main()

test/test_download.py

@@ -1,93 +1,125 @@
-#!/usr/bin/env python2
-import unittest
+#!/usr/bin/env python
+
+import errno
 import hashlib
+import io
 import os
 import json
+import unittest
+import sys
+import hashlib
+import socket
 
-from youtube_dl.FileDownloader import FileDownloader
-from youtube_dl.InfoExtractors import YoutubeIE, DailymotionIE
-from youtube_dl.InfoExtractors import MetacafeIE, BlipTVIE
+# Allow direct execution
+sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+import youtube_dl.FileDownloader
+import youtube_dl.InfoExtractors
+from youtube_dl.utils import *
+
+DEF_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'tests.json')
+PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), "parameters.json")
+
+# General configuration (from __init__, not very elegant...)
+jar = compat_cookiejar.CookieJar()
+cookie_processor = compat_urllib_request.HTTPCookieProcessor(jar)
+proxy_handler = compat_urllib_request.ProxyHandler()
+opener = compat_urllib_request.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
+compat_urllib_request.install_opener(opener)
+
+def _try_rm(filename):
+    """ Remove a file if it exists """
+    try:
+        os.remove(filename)
+    except OSError as ose:
+        if ose.errno != errno.ENOENT:
+            raise
+
+class FileDownloader(youtube_dl.FileDownloader):
+    def __init__(self, *args, **kwargs):
+        self.to_stderr = self.to_screen
+        self.processed_info_dicts = []
+        return youtube_dl.FileDownloader.__init__(self, *args, **kwargs)
+    def process_info(self, info_dict):
+        self.processed_info_dicts.append(info_dict)
+        return youtube_dl.FileDownloader.process_info(self, info_dict)
+
+def _file_md5(fn):
+    with open(fn, 'rb') as f:
+        return hashlib.md5(f.read()).hexdigest()
+
+with io.open(DEF_FILE, encoding='utf-8') as deff:
+    defs = json.load(deff)
+with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
+    parameters = json.load(pf)
 
-class DownloadTest(unittest.TestCase):
-    PARAMETERS_FILE = "test/parameters.json"
-    #calculated with md5sum:
-    #md5sum (GNU coreutils) 8.19
-
-    YOUTUBE_SIZE = 1993883
-    YOUTUBE_URL = "http://www.youtube.com/watch?v=BaW_jenozKc"
-    YOUTUBE_FILE = "BaW_jenozKc.mp4"
-
-    DAILYMOTION_MD5 = "d363a50e9eb4f22ce90d08d15695bb47"
-    DAILYMOTION_URL = "http://www.dailymotion.com/video/x33vw9_tutoriel-de-youtubeur-dl-des-video_tech"
-    DAILYMOTION_FILE = "x33vw9.mp4"
-
-    METACAFE_SIZE = 5754305
-    METACAFE_URL = "http://www.metacafe.com/watch/yt-_aUehQsCQtM/the_electric_company_short_i_pbs_kids_go/"
-    METACAFE_FILE = "_aUehQsCQtM.flv"
-
-    BLIP_MD5 = "93c24d2f4e0782af13b8a7606ea97ba7"
-    BLIP_URL = "http://blip.tv/cbr/cbr-exclusive-gotham-city-imposters-bats-vs-jokerz-short-3-5796352"
-    BLIP_FILE = "5779306.m4v"
-
-    XVIDEO_MD5 = ""
-    XVIDEO_URL = ""
-    XVIDEO_FILE = ""
+class TestDownload(unittest.TestCase):
+    def setUp(self):
+        self.parameters = parameters
+        self.defs = defs
 
-    def test_youtube(self):
-        #let's download a file from youtube
-        with open(DownloadTest.PARAMETERS_FILE) as f:
-            fd = FileDownloader(json.load(f))
-        fd.add_info_extractor(YoutubeIE())
-        fd.download([DownloadTest.YOUTUBE_URL])
-        self.assertTrue(os.path.exists(DownloadTest.YOUTUBE_FILE))
-        self.assertEqual(os.path.getsize(DownloadTest.YOUTUBE_FILE), DownloadTest.YOUTUBE_SIZE)
+### Dynamically generate tests
+def generator(test_case):
 
-    def test_dailymotion(self):
-        with open(DownloadTest.PARAMETERS_FILE) as f:
-            fd = FileDownloader(json.load(f))
-        fd.add_info_extractor(DailymotionIE())
-        fd.download([DownloadTest.DAILYMOTION_URL])
-        self.assertTrue(os.path.exists(DownloadTest.DAILYMOTION_FILE))
-        md5_down_file = md5_for_file(DownloadTest.DAILYMOTION_FILE)
-        self.assertEqual(md5_down_file, DownloadTest.DAILYMOTION_MD5)
+    def test_template(self):
+        ie = getattr(youtube_dl.InfoExtractors, test_case['name'] + 'IE')
+        if not ie._WORKING:
+            print('Skipping: IE marked as not _WORKING')
+            return
+        if 'playlist' not in test_case and not test_case['file']:
+            print('Skipping: No output file specified')
+            return
+        if 'skip' in test_case:
+            print('Skipping: {0}'.format(test_case['skip']))
+            return
 
-    def test_metacafe(self):
-        #this emulate a skip,to be 2.6 compatible
-        with open(DownloadTest.PARAMETERS_FILE) as f:
-            fd = FileDownloader(json.load(f))
-        fd.add_info_extractor(MetacafeIE())
-        fd.add_info_extractor(YoutubeIE())
-        fd.download([DownloadTest.METACAFE_URL])
-        self.assertTrue(os.path.exists(DownloadTest.METACAFE_FILE))
-        self.assertEqual(os.path.getsize(DownloadTest.METACAFE_FILE), DownloadTest.METACAFE_SIZE)
+        params = self.parameters.copy()
+        params.update(test_case.get('params', {}))
 
-    def test_blip(self):
-        with open(DownloadTest.PARAMETERS_FILE) as f:
-            fd = FileDownloader(json.load(f))
-        fd.add_info_extractor(BlipTVIE())
-        fd.download([DownloadTest.BLIP_URL])
-        self.assertTrue(os.path.exists(DownloadTest.BLIP_FILE))
-        md5_down_file = md5_for_file(DownloadTest.BLIP_FILE)
-        self.assertEqual(md5_down_file, DownloadTest.BLIP_MD5)
+        fd = FileDownloader(params)
+        fd.add_info_extractor(ie())
+        for ien in test_case.get('add_ie', []):
+            fd.add_info_extractor(getattr(youtube_dl.InfoExtractors, ien + 'IE')())
+        test_cases = test_case.get('playlist', [test_case])
+        for tc in test_cases:
+            _try_rm(tc['file'])
+            _try_rm(tc['file'] + '.part')
+            _try_rm(tc['file'] + '.info.json')
+        try:
+            fd.download([test_case['url']])
 
-    def tearDown(self):
-        if os.path.exists(DownloadTest.YOUTUBE_FILE):
-            os.remove(DownloadTest.YOUTUBE_FILE)
-        if os.path.exists(DownloadTest.DAILYMOTION_FILE):
-            os.remove(DownloadTest.DAILYMOTION_FILE)
-        if os.path.exists(DownloadTest.METACAFE_FILE):
-            os.remove(DownloadTest.METACAFE_FILE)
-        if os.path.exists(DownloadTest.BLIP_FILE):
-            os.remove(DownloadTest.BLIP_FILE)
+            for tc in test_cases:
+                if not test_case.get('params', {}).get('skip_download', False):
+                    self.assertTrue(os.path.exists(tc['file']))
+                self.assertTrue(os.path.exists(tc['file'] + '.info.json'))
+                if 'md5' in tc:
+                    md5_for_file = _file_md5(tc['file'])
+                    self.assertEqual(md5_for_file, tc['md5'])
+                with io.open(tc['file'] + '.info.json', encoding='utf-8') as infof:
+                    info_dict = json.load(infof)
+                for (info_field, value) in tc.get('info_dict', {}).items():
+                    if value.startswith('md5:'):
+                        md5_info_value = hashlib.md5(info_dict.get(info_field, '')).hexdigest()
+                        self.assertEqual(value[3:], md5_info_value)
+                    else:
+                        self.assertEqual(value, info_dict.get(info_field))
+        finally:
+            for tc in test_cases:
+                _try_rm(tc['file'])
+                _try_rm(tc['file'] + '.part')
+                _try_rm(tc['file'] + '.info.json')
 
-def md5_for_file(filename, block_size=2**20):
-    with open(filename) as f:
-        md5 = hashlib.md5()
-        while True:
-            data = f.read(block_size)
-            if not data:
-                break
-            md5.update(data)
-        return md5.hexdigest()
+    return test_template
+
+### And add them to TestDownload
+for test_case in defs:
+    test_method = generator(test_case)
+    test_method.__name__ = "test_{0}".format(test_case["name"])
+    setattr(TestDownload, test_method.__name__, test_method)
+    del test_method
+
+if __name__ == '__main__':
+    unittest.main()

test/test_execution.py Normal file

@@ -0,0 +1,26 @@
import unittest

import sys
import os
import subprocess

rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

try:
    _DEV_NULL = subprocess.DEVNULL
except AttributeError:
    _DEV_NULL = open(os.devnull, 'wb')

class TestExecution(unittest.TestCase):
    def test_import(self):
        subprocess.check_call([sys.executable, '-c', 'import youtube_dl'], cwd=rootDir)

    def test_module_exec(self):
        if sys.version_info >= (2,7): # Python 2.6 doesn't support package execution
            subprocess.check_call([sys.executable, '-m', 'youtube_dl', '--version'], cwd=rootDir, stdout=_DEV_NULL)

    def test_main_exec(self):
        subprocess.check_call([sys.executable, 'youtube_dl/__main__.py', '--version'], cwd=rootDir, stdout=_DEV_NULL)

if __name__ == '__main__':
    unittest.main()

test/test_utils.py

@@ -1,15 +1,25 @@
-# -*- coding: utf-8 -*-
+#!/usr/bin/env python
+
 # Various small unit tests
+import sys
 import unittest
 
+# Allow direct execution
+import os
+sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
 #from youtube_dl.utils import htmlentity_transform
 from youtube_dl.utils import timeconvert
 from youtube_dl.utils import sanitize_filename
 from youtube_dl.utils import unescapeHTML
 from youtube_dl.utils import orderedSet
 
+if sys.version_info < (3, 0):
+    _compat_str = lambda b: b.decode('unicode-escape')
+else:
+    _compat_str = lambda s: s
+
 class TestUtil(unittest.TestCase):
     def test_timeconvert(self):
@@ -17,31 +27,74 @@ class TestUtil(unittest.TestCase):
         self.assertTrue(timeconvert('bougrg') is None)
 
     def test_sanitize_filename(self):
-        self.assertEqual(sanitize_filename(u'abc'), u'abc')
-        self.assertEqual(sanitize_filename(u'abc_d-e'), u'abc_d-e')
+        self.assertEqual(sanitize_filename('abc'), 'abc')
+        self.assertEqual(sanitize_filename('abc_d-e'), 'abc_d-e')
 
-        self.assertEqual(sanitize_filename(u'123'), u'123')
+        self.assertEqual(sanitize_filename('123'), '123')
 
-        self.assertEqual(u'abc-de', sanitize_filename(u'abc/de'))
-        self.assertFalse(u'/' in sanitize_filename(u'abc/de///'))
+        self.assertEqual('abc_de', sanitize_filename('abc/de'))
+        self.assertFalse('/' in sanitize_filename('abc/de///'))
 
-        self.assertEqual(u'abc-de', sanitize_filename(u'abc/<>\\*|de'))
-        self.assertEqual(u'xxx', sanitize_filename(u'xxx/<>\\*|'))
-        self.assertEqual(u'yes no', sanitize_filename(u'yes? no'))
-        self.assertEqual(u'this - that', sanitize_filename(u'this: that'))
+        self.assertEqual('abc_de', sanitize_filename('abc/<>\\*|de'))
+        self.assertEqual('xxx', sanitize_filename('xxx/<>\\*|'))
+        self.assertEqual('yes no', sanitize_filename('yes? no'))
+        self.assertEqual('this - that', sanitize_filename('this: that'))
 
-        self.assertEqual(sanitize_filename(u'ä'), u'ä')
-        self.assertEqual(sanitize_filename(u'кириллица'), u'кириллица')
+        self.assertEqual(sanitize_filename('AT&T'), 'AT&T')
+        aumlaut = _compat_str('\xe4')
+        self.assertEqual(sanitize_filename(aumlaut), aumlaut)
+        tests = _compat_str('\u043a\u0438\u0440\u0438\u043b\u043b\u0438\u0446\u0430')
+        self.assertEqual(sanitize_filename(tests), tests)
 
-        for forbidden in u'"\0\\/':
-            self.assertTrue(forbidden not in sanitize_filename(forbidden))
+        forbidden = '"\0\\/'
+        for fc in forbidden:
+            for fbc in forbidden:
+                self.assertTrue(fbc not in sanitize_filename(fc))
+
+    def test_sanitize_filename_restricted(self):
+        self.assertEqual(sanitize_filename('abc', restricted=True), 'abc')
+        self.assertEqual(sanitize_filename('abc_d-e', restricted=True), 'abc_d-e')
+
+        self.assertEqual(sanitize_filename('123', restricted=True), '123')
+
+        self.assertEqual('abc_de', sanitize_filename('abc/de', restricted=True))
+        self.assertFalse('/' in sanitize_filename('abc/de///', restricted=True))
+
+        self.assertEqual('abc_de', sanitize_filename('abc/<>\\*|de', restricted=True))
+        self.assertEqual('xxx', sanitize_filename('xxx/<>\\*|', restricted=True))
+        self.assertEqual('yes_no', sanitize_filename('yes? no', restricted=True))
+        self.assertEqual('this_-_that', sanitize_filename('this: that', restricted=True))
+
+        tests = _compat_str('a\xe4b\u4e2d\u56fd\u7684c')
+        self.assertEqual(sanitize_filename(tests, restricted=True), 'a_b_c')
+        self.assertTrue(sanitize_filename(_compat_str('\xf6'), restricted=True) != '')  # No empty filename
+
+        forbidden = '"\0\\/&!: \'\t\n()[]{}$;`^,#'
+        for fc in forbidden:
+            for fbc in forbidden:
+                self.assertTrue(fbc not in sanitize_filename(fc, restricted=True))
+
+        # Handle a common case more neatly
+        self.assertEqual(sanitize_filename(_compat_str('\u5927\u58f0\u5e26 - Song'), restricted=True), 'Song')
+        self.assertEqual(sanitize_filename(_compat_str('\u603b\u7edf: Speech'), restricted=True), 'Speech')
+        # .. but make sure the file name is never empty
+        self.assertTrue(sanitize_filename('-', restricted=True) != '')
+        self.assertTrue(sanitize_filename(':', restricted=True) != '')
+
+    def test_sanitize_ids(self):
+        self.assertEqual(sanitize_filename('_n_cd26wFpw', is_id=True), '_n_cd26wFpw')
+        self.assertEqual(sanitize_filename('_BD_eEpuzXw', is_id=True), '_BD_eEpuzXw')
+        self.assertEqual(sanitize_filename('N0Y__7-UOdI', is_id=True), 'N0Y__7-UOdI')
 
     def test_ordered_set(self):
-        self.assertEqual(orderedSet([1,1,2,3,4,4,5,6,7,3,5]), [1,2,3,4,5,6,7])
+        self.assertEqual(orderedSet([1, 1, 2, 3, 4, 4, 5, 6, 7, 3, 5]), [1, 2, 3, 4, 5, 6, 7])
         self.assertEqual(orderedSet([]), [])
         self.assertEqual(orderedSet([1]), [1])
         #keep the list ordered
-        self.assertEqual(orderedSet([135,1,1,1]), [135,1])
+        self.assertEqual(orderedSet([135, 1, 1, 1]), [135, 1])
 
     def test_unescape_html(self):
-        self.assertEqual(unescapeHTML(u"%20;"), u"%20;")
+        self.assertEqual(unescapeHTML(_compat_str('%20;')), _compat_str('%20;'))
+
+if __name__ == '__main__':
+    unittest.main()

test/test_write_info_json.py

@@ -0,0 +1,77 @@
#!/usr/bin/env python
# coding: utf-8

import json
import os
import sys
import unittest

# Allow direct execution
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import youtube_dl.FileDownloader
import youtube_dl.InfoExtractors
from youtube_dl.utils import *

PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), "parameters.json")

# General configuration (from __init__, not very elegant...)
jar = compat_cookiejar.CookieJar()
cookie_processor = compat_urllib_request.HTTPCookieProcessor(jar)
proxy_handler = compat_urllib_request.ProxyHandler()
opener = compat_urllib_request.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
compat_urllib_request.install_opener(opener)

class FileDownloader(youtube_dl.FileDownloader):
    def __init__(self, *args, **kwargs):
        youtube_dl.FileDownloader.__init__(self, *args, **kwargs)
        self.to_stderr = self.to_screen

with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
    params = json.load(pf)
params['writeinfojson'] = True
params['skip_download'] = True
params['writedescription'] = True

TEST_ID = 'BaW_jenozKc'
INFO_JSON_FILE = TEST_ID + '.mp4.info.json'
DESCRIPTION_FILE = TEST_ID + '.mp4.description'
EXPECTED_DESCRIPTION = u'''test chars: "'/\ä↭𝕐

This is a test video for youtube-dl.

For more information, contact phihag@phihag.de .'''

class TestInfoJSON(unittest.TestCase):
    def setUp(self):
        # Clear old files
        self.tearDown()

    def test_info_json(self):
        ie = youtube_dl.InfoExtractors.YoutubeIE()
        fd = FileDownloader(params)
        fd.add_info_extractor(ie)
        fd.download([TEST_ID])
        self.assertTrue(os.path.exists(INFO_JSON_FILE))
        with io.open(INFO_JSON_FILE, 'r', encoding='utf-8') as jsonf:
            jd = json.load(jsonf)
        self.assertEqual(jd['upload_date'], u'20121002')
        self.assertEqual(jd['description'], EXPECTED_DESCRIPTION)
        self.assertEqual(jd['id'], TEST_ID)
        self.assertEqual(jd['extractor'], 'youtube')
        self.assertEqual(jd['title'], u'''youtube-dl test video "'/\ä↭𝕐''')
        self.assertEqual(jd['uploader'], 'Philipp Hagemeister')

        self.assertTrue(os.path.exists(DESCRIPTION_FILE))
        with io.open(DESCRIPTION_FILE, 'r', encoding='utf-8') as descf:
            descr = descf.read()
        self.assertEqual(descr, EXPECTED_DESCRIPTION)

    def tearDown(self):
        if os.path.exists(INFO_JSON_FILE):
            os.remove(INFO_JSON_FILE)
        if os.path.exists(DESCRIPTION_FILE):
            os.remove(DESCRIPTION_FILE)

if __name__ == '__main__':
    unittest.main()

test/test_youtube_lists.py

@@ -0,0 +1,73 @@
#!/usr/bin/env python

import sys
import unittest
import json

# Allow direct execution
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.InfoExtractors import YoutubeUserIE, YoutubePlaylistIE
from youtube_dl.utils import *

PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), "parameters.json")
with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
    parameters = json.load(pf)

# General configuration (from __init__, not very elegant...)
jar = compat_cookiejar.CookieJar()
cookie_processor = compat_urllib_request.HTTPCookieProcessor(jar)
proxy_handler = compat_urllib_request.ProxyHandler()
opener = compat_urllib_request.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
compat_urllib_request.install_opener(opener)

class FakeDownloader(object):
    def __init__(self):
        self.result = []
        self.params = parameters
    def to_screen(self, s):
        print(s)
    def trouble(self, s):
        raise Exception(s)
    def download(self, x):
        self.result.append(x)

class TestYoutubeLists(unittest.TestCase):
    def test_youtube_playlist(self):
        DL = FakeDownloader()
        IE = YoutubePlaylistIE(DL)
        IE.extract('https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re')
        self.assertEqual(DL.result, [
            ['http://www.youtube.com/watch?v=bV9L5Ht9LgY'],
            ['http://www.youtube.com/watch?v=FXxLjLQi3Fg'],
            ['http://www.youtube.com/watch?v=tU3Bgo5qJZE']
        ])

    def test_youtube_playlist_long(self):
        DL = FakeDownloader()
        IE = YoutubePlaylistIE(DL)
        IE.extract('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
        self.assertTrue(len(DL.result) >= 799)

    def test_youtube_course(self):
        DL = FakeDownloader()
        IE = YoutubePlaylistIE(DL)
        # TODO find a > 100 (paginating?) videos course
        IE.extract('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
        self.assertEqual(DL.result[0], ['http://www.youtube.com/watch?v=j9WZyLZCBzs'])
        self.assertEqual(len(DL.result), 25)
        self.assertEqual(DL.result[-1], ['http://www.youtube.com/watch?v=rYefUsYuEp0'])

    def test_youtube_channel(self):
        # I give up, please find a channel that does paginate and test this like test_youtube_playlist_long
        pass # TODO

    def test_youtube_user(self):
        DL = FakeDownloader()
        IE = YoutubeUserIE(DL)
        IE.extract('https://www.youtube.com/user/TheLinuxFoundation')
        self.assertTrue(len(DL.result) >= 320)

if __name__ == '__main__':
    unittest.main()

test/test_youtube_subtitles.py

@@ -0,0 +1,57 @@
#!/usr/bin/env python

import sys
import unittest
import json
import io
import hashlib

# Allow direct execution
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.InfoExtractors import YoutubeIE
from youtube_dl.utils import *

PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), "parameters.json")
with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
    parameters = json.load(pf)

# General configuration (from __init__, not very elegant...)
jar = compat_cookiejar.CookieJar()
cookie_processor = compat_urllib_request.HTTPCookieProcessor(jar)
proxy_handler = compat_urllib_request.ProxyHandler()
opener = compat_urllib_request.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
compat_urllib_request.install_opener(opener)

class FakeDownloader(object):
    def __init__(self):
        self.result = []
        self.params = parameters
    def to_screen(self, s):
        print(s)
    def trouble(self, s):
        raise Exception(s)
    def download(self, x):
        self.result.append(x)

md5 = lambda s: hashlib.md5(s.encode('utf-8')).hexdigest()

class TestYoutubeSubtitles(unittest.TestCase):
    def test_youtube_subtitles(self):
        DL = FakeDownloader()
        DL.params['writesubtitles'] = True
        IE = YoutubeIE(DL)
        info_dict = IE.extract('QRS8MkLhQmM')
        self.assertEqual(md5(info_dict[0]['subtitles']), 'c3228550d59116f3c29fba370b55d033')

    def test_youtube_subtitles_it(self):
        DL = FakeDownloader()
        DL.params['writesubtitles'] = True
        DL.params['subtitleslang'] = 'it'
        IE = YoutubeIE(DL)
        info_dict = IE.extract('QRS8MkLhQmM')
        self.assertEqual(md5(info_dict[0]['subtitles']), '132a88a0daf8e1520f393eb58f1f646a')

if __name__ == '__main__':
    unittest.main()

164
test/tests.json Normal file
View File

@ -0,0 +1,164 @@
[
{
"name": "Youtube",
"url": "http://www.youtube.com/watch?v=BaW_jenozKc",
"file": "BaW_jenozKc.mp4",
"info_dict": {
"title": "youtube-dl test video \"'/\\ä↭𝕐",
"uploader": "Philipp Hagemeister",
"uploader_id": "phihag",
"upload_date": "20121002",
"description": "test chars: \"'/\\ä↭𝕐\n\nThis is a test video for youtube-dl.\n\nFor more information, contact phihag@phihag.de ."
}
},
{
"name": "Dailymotion",
"md5": "392c4b85a60a90dc4792da41ce3144eb",
"url": "http://www.dailymotion.com/video/x33vw9_tutoriel-de-youtubeur-dl-des-video_tech",
"file": "x33vw9.mp4"
},
{
"name": "Metacafe",
"add_ie": ["Youtube"],
"url": "http://metacafe.com/watch/yt-_aUehQsCQtM/the_electric_company_short_i_pbs_kids_go/",
"file": "_aUehQsCQtM.flv"
},
{
"name": "BlipTV",
"md5": "b2d849efcf7ee18917e4b4d9ff37cafe",
"url": "http://blip.tv/cbr/cbr-exclusive-gotham-city-imposters-bats-vs-jokerz-short-3-5796352",
"file": "5779306.m4v"
},
{
"name": "XVideos",
"md5": "1d0c835822f0a71a7bf011855db929d0",
"url": "http://www.xvideos.com/video939581/funny_porns_by_s_-1",
"file": "939581.flv"
},
{
"name": "Vimeo",
"md5": "8879b6cc097e987f02484baf890129e5",
"url": "http://vimeo.com/56015672",
"file": "56015672.mp4",
"info_dict": {
"title": "youtube-dl test video - ★ \" ' 幸 / \\ ä ↭ 𝕐",
"uploader": "Filippo Valsorda",
"uploader_id": "user7108434",
"upload_date": "20121220",
"description": "This is a test case for youtube-dl.\nFor more information, see github.com/rg3/youtube-dl\nTest chars: ★ \" ' 幸 / \\ ä ↭ 𝕐"
}
},
{
"name": "Soundcloud",
"md5": "ebef0a451b909710ed1d7787dddbf0d7",
"url": "http://soundcloud.com/ethmusic/lostin-powers-she-so-heavy",
"file": "62986583.mp3"
},
{
"name": "StanfordOpenClassroom",
"md5": "544a9468546059d4e80d76265b0443b8",
"url": "http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=PracticalUnix&video=intro-environment&speed=100",
"file": "PracticalUnix_intro-environment.mp4"
},
{
"name": "XNXX",
"md5": "0831677e2b4761795f68d417e0b7b445",
"url": "http://video.xnxx.com/video1135332/lida_naked_funny_actress_5_",
"file": "1135332.flv"
},
{
"name": "Youku",
"url": "http://v.youku.com/v_show/id_XNDgyMDQ2NTQw.html",
"file": "XNDgyMDQ2NTQw_part00.flv",
"md5": "ffe3f2e435663dc2d1eea34faeff5b5b",
"params": { "test": false }
},
{
"name": "NBA",
"url": "http://www.nba.com/video/games/nets/2012/12/04/0021200253-okc-bkn-recap.nba/index.html",
"file": "0021200253-okc-bkn-recap.nba.mp4",
"md5": "c0edcfc37607344e2ff8f13c378c88a4"
},
{
"name": "JustinTV",
"url": "http://www.twitch.tv/thegamedevhub/b/296128360",
"file": "296128360.flv",
"md5": "ecaa8a790c22a40770901460af191c9a"
},
{
"name": "MyVideo",
"url": "http://www.myvideo.de/watch/8229274/bowling_fail_or_win",
"file": "8229274.flv",
"md5": "2d2753e8130479ba2cb7e0a37002053e"
},
{
"name": "Escapist",
"url": "http://www.escapistmagazine.com/videos/view/the-escapist-presents/6618-Breaking-Down-Baldurs-Gate",
"file": "6618-Breaking-Down-Baldurs-Gate.flv",
"md5": "c6793dbda81388f4264c1ba18684a74d",
"skip": "Fails with timeout on Travis"
},
{
"name": "GooglePlus",
"url": "https://plus.google.com/u/0/108897254135232129896/posts/ZButuJc6CtH",
"file": "ZButuJc6CtH.flv"
},
{
"name": "FunnyOrDie",
"url": "http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version",
"file": "0732f586d7.mp4",
"md5": "f647e9e90064b53b6e046e75d0241fbd"
},
{
"name": "TweetReel",
"url": "http://tweetreel.com/?77smq",
"file": "77smq.mov",
"md5": "56b4d9ca9de467920f3f99a6d91255d6",
"info_dict": {
"uploader": "itszero",
"uploader_id": "itszero",
"upload_date": "20091225",
"description": "Installing Gentoo Linux on Powerbook G4, it turns out the sleep indicator becomes HDD activity indicator :D"
}
},
{
"name": "Steam",
"url": "http://store.steampowered.com/video/105600/",
"playlist": [
{
"file": "81300.flv",
"md5": "f870007cee7065d7c76b88f0a45ecc07",
"info_dict": {
"title": "Terraria 1.1 Trailer"
}
},
{
"file": "80859.flv",
"md5": "61aaf31a5c5c3041afb58fb83cbb5751",
"info_dict": {
"title": "Terraria Trailer"
}
}
]
},
{
"name": "Ustream",
"url": "http://www.ustream.tv/recorded/20274954",
"file": "20274954.flv",
"md5": "088f151799e8f572f84eb62f17d73e5c",
"info_dict": {
"title": "Young Americans for Liberty February 7, 2012 2:28 AM"
}
},
{
"name": "InfoQ",
"url": "http://www.infoq.com/presentations/A-Few-of-My-Favorite-Python-Things",
"file": "12-jan-pythonthings.mp4",
"info_dict": {
"title": "A Few of My Favorite [Python] Things"
},
"params": {
"skip_download": true
}
}
]
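
The md5 fields above are the expected checksums of the downloaded files. A minimal sketch of a consumer, assuming the downloads already exist and the script runs from the repository root (the real runner lives in the test suite and is not part of this hunk):

import hashlib
import io
import json

def file_md5(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

with io.open('test/tests.json', encoding='utf-8') as f:
    cases = json.load(f)

for case in cases:
    if case.get('skip') or case.get('params', {}).get('skip_download'):
        continue  # e.g. the Escapist and InfoQ entries above
    # Steam-style entries carry their files in a "playlist" list
    for item in case.get('playlist', [case]):
        if 'md5' in item:
            ok = file_md5(item['file']) == item['md5']
            print('%s: %s %s' % (case['name'], item['file'], 'OK' if ok else 'FAIL'))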

youtube-dl Executable file

Binary file not shown.

youtube-dl.bash-completion

@@ -1,14 +0,0 @@
__youtube-dl()
{
    local cur prev opts
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    opts="--all-formats --audio-format --audio-quality --auto-number --batch-file --console-title --continue --cookies --dump-user-agent --extract-audio --format --get-description --get-filename --get-format --get-thumbnail --get-title --get-url --help --id --ignore-errors --keep-video --list-extractors --list-formats --literal --match-title --max-downloads --max-quality --netrc --no-continue --no-mtime --no-overwrites --no-part --no-progress --output --password --playlist-end --playlist-start --prefer-free-formats --quiet --rate-limit --reject-title --retries --simulate --skip-download --srt-lang --title --update --user-agent --username --verbose --version --write-description --write-info-json --write-srt"

    if [[ ${cur} == * ]] ; then
        COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
        return 0
    fi
}
complete -F __youtube-dl youtube-dl

youtube-dl.exe Normal file

Binary file not shown.

youtube_dl/FileDownloader.py

@@ -1,20 +1,22 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
-import httplib
+from __future__ import absolute_import
 import math
+import io
 import os
 import re
 import socket
 import subprocess
 import sys
 import time
-import urllib2
+import traceback
 if os.name == 'nt':
     import ctypes
-from utils import *
+from .utils import *

 class FileDownloader(object):
@@ -57,10 +59,13 @@ class FileDownloader(object):
     format:            Video format code.
     format_limit:      Highest quality format to try.
     outtmpl:           Template for output names.
+    restrictfilenames: Do not allow "&" and spaces in file names
     ignoreerrors:      Do not stop on download errors.
     ratelimit:         Download speed limit, in bytes/sec.
     nooverwrites:      Prevent overwriting files.
     retries:           Number of times to retry for HTTP error 5xx
+    buffersize:        Size of download buffer in bytes.
+    noresizebuffer:    Do not automatically resize the download buffer.
     continuedl:        Try to continue downloads if possible.
     noprogress:        Do not print the progress bar.
     playliststart:     Playlist item to start at.
@@ -75,6 +80,7 @@ class FileDownloader(object):
     writeinfojson:     Write the video description to a .info.json file
     writesubtitles:    Write the video subtitles to a .srt file
     subtitleslang:     Language of the subtitles to download
+    test:              Download only first bytes to test the downloader.
     """

     params = None
@@ -93,6 +99,9 @@ class FileDownloader(object):
         self._screen_file = [sys.stdout, sys.stderr][params.get('logtostderr', False)]
         self.params = params

+        if '%(stitle)s' in self.params['outtmpl']:
+            self.to_stderr(u'WARNING: %(stitle)s is deprecated. Use the %(title)s and the --restrict-filenames flag(which also secures %(uploader)s et al) instead.')
+
     @staticmethod
     def format_bytes(bytes):
         if bytes is None:
@@ -102,7 +111,7 @@ class FileDownloader(object):
         if bytes == 0.0:
             exponent = 0
         else:
-            exponent = long(math.log(bytes, 1024.0))
+            exponent = int(math.log(bytes, 1024.0))
         suffix = 'bkMGTPEZY'[exponent]
         converted = float(bytes) / float(1024 ** exponent)
         return '%.2f%s' % (converted, suffix)
@@ -121,7 +130,7 @@ class FileDownloader(object):
         if current == 0 or dif < 0.001: # One millisecond
             return '--:--'
         rate = float(current) / dif
-        eta = long((float(total) - float(current)) / rate)
+        eta = int((float(total) - float(current)) / rate)
         (eta_mins, eta_secs) = divmod(eta, 60)
         if eta_mins > 99:
             return '--:--'
@@ -139,23 +148,23 @@ class FileDownloader(object):
         new_min = max(bytes / 2.0, 1.0)
         new_max = min(max(bytes * 2.0, 1.0), 4194304) # Do not surpass 4 MB
         if elapsed_time < 0.001:
-            return long(new_max)
+            return int(new_max)
         rate = bytes / elapsed_time
         if rate > new_max:
-            return long(new_max)
+            return int(new_max)
         if rate < new_min:
-            return long(new_min)
+            return int(new_min)
-        return long(rate)
+        return int(rate)

     @staticmethod
     def parse_bytes(bytestr):
-        """Parse a string indicating a byte quantity into a long integer."""
+        """Parse a string indicating a byte quantity into an integer."""
         matchobj = re.match(r'(?i)^(\d+(?:\.\d+)?)([kMGTPEZY]?)$', bytestr)
         if matchobj is None:
             return None
         number = float(matchobj.group(1))
         multiplier = 1024.0 ** 'bkmgtpezy'.index(matchobj.group(2).lower())
-        return long(round(number * multiplier))
+        return int(round(number * multiplier))

     def add_info_extractor(self, ie):
         """Add an InfoExtractor object to the end of the list."""
@@ -173,14 +182,18 @@ class FileDownloader(object):
         if not self.params.get('quiet', False):
             terminator = [u'\n', u''][skip_eol]
             output = message + terminator
-            if 'b' not in self._screen_file.mode or sys.version_info[0] < 3: # Python 2 lies about the mode of sys.stdout/sys.stderr
+            if 'b' in getattr(self._screen_file, 'mode', '') or sys.version_info[0] < 3: # Python 2 lies about the mode of sys.stdout/sys.stderr
                 output = output.encode(preferredencoding(), 'ignore')
             self._screen_file.write(output)
             self._screen_file.flush()

     def to_stderr(self, message):
         """Print message to stderr."""
-        print >>sys.stderr, message.encode(preferredencoding())
+        assert type(message) == type(u'')
+        output = message + u'\n'
+        if 'b' in getattr(self._screen_file, 'mode', '') or sys.version_info[0] < 3: # Python 2 lies about the mode of sys.stdout/sys.stderr
+            output = output.encode(preferredencoding())
+        sys.stderr.write(output)

     def to_cons_title(self, message):
         """Set console/terminal window title to message."""
@@ -195,17 +208,24 @@ class FileDownloader(object):
     def fixed_template(self):
         """Checks if the output template is fixed."""
-        return (re.search(ur'(?u)%\(.+?\)s', self.params['outtmpl']) is None)
+        return (re.search(u'(?u)%\\(.+?\\)s', self.params['outtmpl']) is None)

-    def trouble(self, message=None):
+    def trouble(self, message=None, tb=None):
         """Determine action to take when a download problem appears.

         Depending on if the downloader has been configured to ignore
         download errors or not, this method may throw an exception or
         not when errors are found, after printing the message.
+
+        tb, if given, is additional traceback information.
         """
         if message is not None:
             self.to_stderr(message)
+        if self.params.get('verbose'):
+            if tb is None:
+                tb_data = traceback.format_list(traceback.extract_stack())
+                tb = u''.join(tb_data)
+            self.to_stderr(tb)
+
         if not self.params.get('ignoreerrors', False):
             raise DownloadError(message)
         self._download_retcode = 1
@@ -240,7 +260,7 @@ class FileDownloader(object):
             if old_filename == new_filename:
                 return
             os.rename(encodeFilename(old_filename), encodeFilename(new_filename))
-        except (IOError, OSError), err:
+        except (IOError, OSError) as err:
             self.trouble(u'ERROR: unable to rename file')

     def try_utime(self, filename, last_modified_hdr):
@@ -298,7 +318,7 @@ class FileDownloader(object):
         """Report file has already been fully downloaded."""
         try:
             self.to_screen(u'[download] %s has already been downloaded' % file_name)
-        except (UnicodeEncodeError), err:
+        except (UnicodeEncodeError) as err:
             self.to_screen(u'[download] The file has already been downloaded')

     def report_unable_to_resume(self):
@@ -320,11 +340,19 @@ class FileDownloader(object):
         """Generate the output filename."""
         try:
             template_dict = dict(info_dict)
-            template_dict['epoch'] = unicode(long(time.time()))
-            template_dict['autonumber'] = unicode('%05d' % self._num_downloads)
+            template_dict['epoch'] = int(time.time())
+            template_dict['autonumber'] = u'%05d' % self._num_downloads
+
+            sanitize = lambda k,v: sanitize_filename(
+                u'NA' if v is None else compat_str(v),
+                restricted=self.params.get('restrictfilenames'),
+                is_id=(k==u'id'))
+            template_dict = dict((k, sanitize(k, v)) for k,v in template_dict.items())
+
             filename = self.params['outtmpl'] % template_dict
             return filename
-        except (ValueError, KeyError), err:
+        except (ValueError, KeyError) as err:
             self.trouble(u'ERROR: invalid system charset or erroneous output template')
             return None
@@ -347,7 +375,11 @@ class FileDownloader(object):
     def process_info(self, info_dict):
         """Process a single dictionary returned by an InfoExtractor."""

-        info_dict['stitle'] = sanitize_filename(info_dict['title'])
+        # Keep for backwards compatibility
+        info_dict['stitle'] = info_dict['title']
+
+        if not 'format' in info_dict:
+            info_dict['format'] = info_dict['ext']

         reason = self._match_entry(info_dict)
         if reason is not None:
@@ -363,17 +395,17 @@ class FileDownloader(object):

         # Forced printings
         if self.params.get('forcetitle', False):
-            print info_dict['title'].encode(preferredencoding(), 'xmlcharrefreplace')
+            compat_print(info_dict['title'])
         if self.params.get('forceurl', False):
-            print info_dict['url'].encode(preferredencoding(), 'xmlcharrefreplace')
+            compat_print(info_dict['url'])
         if self.params.get('forcethumbnail', False) and 'thumbnail' in info_dict:
-            print info_dict['thumbnail'].encode(preferredencoding(), 'xmlcharrefreplace')
+            compat_print(info_dict['thumbnail'])
         if self.params.get('forcedescription', False) and 'description' in info_dict:
-            print info_dict['description'].encode(preferredencoding(), 'xmlcharrefreplace')
+            compat_print(info_dict['description'])
         if self.params.get('forcefilename', False) and filename is not None:
-            print filename.encode(preferredencoding(), 'xmlcharrefreplace')
+            compat_print(filename)
         if self.params.get('forceformat', False):
-            print info_dict['format'].encode(preferredencoding(), 'xmlcharrefreplace')
+            compat_print(info_dict['format'])

         # Do nothing else if in simulate mode
         if self.params.get('simulate', False):
@@ -386,19 +418,16 @@ class FileDownloader(object):
             dn = os.path.dirname(encodeFilename(filename))
             if dn != '' and not os.path.exists(dn): # dn is already encoded
                 os.makedirs(dn)
-        except (OSError, IOError), err:
-            self.trouble(u'ERROR: unable to create directory ' + unicode(err))
+        except (OSError, IOError) as err:
+            self.trouble(u'ERROR: unable to create directory ' + compat_str(err))
             return

         if self.params.get('writedescription', False):
             try:
                 descfn = filename + u'.description'
                 self.report_writedescription(descfn)
-                descfile = open(encodeFilename(descfn), 'wb')
-                try:
-                    descfile.write(info_dict['description'].encode('utf-8'))
-                finally:
-                    descfile.close()
+                with io.open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
+                    descfile.write(info_dict['description'])
             except (OSError, IOError):
                 self.trouble(u'ERROR: Cannot write description file ' + descfn)
                 return
@@ -409,11 +438,8 @@ class FileDownloader(object):
             try:
                 srtfn = filename.rsplit('.', 1)[0] + u'.srt'
                 self.report_writesubtitles(srtfn)
-                srtfile = open(encodeFilename(srtfn), 'wb')
-                try:
-                    srtfile.write(info_dict['subtitles'].encode('utf-8'))
-                finally:
-                    srtfile.close()
+                with io.open(encodeFilename(srtfn), 'w', encoding='utf-8') as srtfile:
+                    srtfile.write(info_dict['subtitles'])
             except (OSError, IOError):
                 self.trouble(u'ERROR: Cannot write subtitles file ' + descfn)
                 return
@@ -422,17 +448,8 @@ class FileDownloader(object):
             infofn = filename + u'.info.json'
             self.report_writeinfojson(infofn)
             try:
-                json.dump
-            except (NameError,AttributeError):
-                self.trouble(u'ERROR: No JSON encoder found. Update to Python 2.6+, setup a json module, or leave out --write-info-json.')
-                return
-            try:
-                infof = open(encodeFilename(infofn), 'wb')
-                try:
-                    json_info_dict = dict((k,v) for k,v in info_dict.iteritems() if not k in ('urlhandle',))
-                    json.dump(json_info_dict, infof)
-                finally:
-                    infof.close()
+                json_info_dict = dict((k, v) for k,v in info_dict.items() if not k in ['urlhandle'])
+                write_json_file(json_info_dict, encodeFilename(infofn))
             except (OSError, IOError):
                 self.trouble(u'ERROR: Cannot write metadata to JSON file ' + infofn)
                 return
@@ -443,19 +460,19 @@ class FileDownloader(object):
         else:
             try:
                 success = self._do_download(filename, info_dict)
-            except (OSError, IOError), err:
-                raise UnavailableVideoError
+            except (OSError, IOError) as err:
+                raise UnavailableVideoError()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
                 self.trouble(u'ERROR: unable to download video data: %s' % str(err))
                 return
-            except (ContentTooShortError, ), err:
+            except (ContentTooShortError, ) as err:
                 self.trouble(u'ERROR: content too short (expected %s bytes and served %s)' % (err.expected, err.downloaded))
                 return

         if success:
             try:
                 self.post_process(filename, info_dict)
-            except (PostProcessingError), err:
+            except (PostProcessingError) as err:
                 self.trouble(u'ERROR: postprocessing: %s' % str(err))
                 return
@@ -471,11 +488,30 @@ class FileDownloader(object):
             if not ie.suitable(url):
                 continue

+            # Warn if the _WORKING attribute is False
+            if not ie.working():
+                self.to_stderr(u'WARNING: the program functionality for this site has been marked as broken, '
+                               u'and will probably not work. If you want to go on, use the -i option.')
+
             # Suitable InfoExtractor found
             suitable_found = True

             # Extract information from URL and process it
-            videos = ie.extract(url)
+            try:
+                videos = ie.extract(url)
+            except ExtractorError as de: # An error we somewhat expected
+                self.trouble(u'ERROR: ' + compat_str(de), de.format_traceback())
+                break
+            except Exception as e:
+                if self.params.get('ignoreerrors', False):
+                    self.trouble(u'ERROR: ' + compat_str(e), tb=compat_str(traceback.format_exc()))
+                    break
+                else:
+                    raise
+
+            if len(videos or []) > 1 and self.fixed_template():
+                raise SameFileError(self.params['outtmpl'])
+
             for video in videos or []:
                 video['extractor'] = ie.IE_NAME
                 try:
@@ -501,7 +537,7 @@ class FileDownloader(object):
             if info is None:
                 break

-    def _download_with_rtmpdump(self, filename, url, player_url):
+    def _download_with_rtmpdump(self, filename, url, player_url, page_url):
         self.report_destination(filename)
         tmpfilename = self.temp_name(filename)
@@ -515,7 +551,11 @@ class FileDownloader(object):
         # Download using rtmpdump. rtmpdump returns exit code 2 when
         # the connection was interrumpted and resuming appears to be
         # possible. This is part of rtmpdump's normal usage, AFAIK.
-        basic_args = ['rtmpdump', '-q'] + [[], ['-W', player_url]][player_url is not None] + ['-r', url, '-o', tmpfilename]
+        basic_args = ['rtmpdump', '-q', '-r', url, '-o', tmpfilename]
+        if player_url is not None:
+            basic_args += ['-W', player_url]
+        if page_url is not None:
+            basic_args += ['--pageUrl', page_url]
         args = basic_args + [[], ['-e', '-k', '1']][self.params.get('continuedl', False)]
         if self.params.get('verbose', False):
             try:
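# Hedged sketch of the argv the new basic_args logic above produces when
# both optional URLs are present (all values are placeholders):
url = 'rtmp://example.com/app/stream'
tmpfilename = 'video.flv.part'
player_url = 'http://example.com/player.swf'
page_url = 'http://example.com/watch/123'

basic_args = ['rtmpdump', '-q', '-r', url, '-o', tmpfilename]
if player_url is not None:
    basic_args += ['-W', player_url]
if page_url is not None:
    basic_args += ['--pageUrl', page_url]
print(basic_args)
# ['rtmpdump', '-q', '-r', 'rtmp://example.com/app/stream',
#  '-o', 'video.flv.part', '-W', 'http://example.com/player.swf',
#  '--pageUrl', 'http://example.com/watch/123']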
@@ -548,7 +588,6 @@ class FileDownloader(object):
     def _do_download(self, filename, info_dict):
         url = info_dict['url']
-        player_url = info_dict.get('player_url', None)

         # Check file already present
         if self.params.get('continuedl', False) and os.path.isfile(encodeFilename(filename)) and not self.params.get('nopart', False):
@@ -557,15 +596,20 @@ class FileDownloader(object):
         # Attempt to download using rtmpdump
         if url.startswith('rtmp'):
-            return self._download_with_rtmpdump(filename, url, player_url)
+            return self._download_with_rtmpdump(filename, url,
+                                                info_dict.get('player_url', None),
+                                                info_dict.get('page_url', None))

         tmpfilename = self.temp_name(filename)
         stream = None

         # Do not include the Accept-Encoding header
         headers = {'Youtubedl-no-compression': 'True'}
-        basic_request = urllib2.Request(url, None, headers)
-        request = urllib2.Request(url, None, headers)
+        basic_request = compat_urllib_request.Request(url, None, headers)
+        request = compat_urllib_request.Request(url, None, headers)
+
+        if self.params.get('test', False):
+            request.add_header('Range','bytes=0-10240')

         # Establish possible resume length
         if os.path.isfile(encodeFilename(tmpfilename)):
@@ -589,9 +633,9 @@ class FileDownloader(object):
             try:
                 if count == 0 and 'urlhandle' in info_dict:
                     data = info_dict['urlhandle']
-                data = urllib2.urlopen(request)
+                data = compat_urllib_request.urlopen(request)
                 break
-            except (urllib2.HTTPError, ), err:
+            except (compat_urllib_error.HTTPError, ) as err:
                 if (err.code < 500 or err.code >= 600) and err.code != 416:
                     # Unexpected HTTP error
                     raise
@@ -599,15 +643,15 @@ class FileDownloader(object):
                     # Unable to resume (requested range not satisfiable)
                     try:
                         # Open the connection again without the range header
-                        data = urllib2.urlopen(basic_request)
+                        data = compat_urllib_request.urlopen(basic_request)
                         content_length = data.info()['Content-Length']
-                    except (urllib2.HTTPError, ), err:
+                    except (compat_urllib_error.HTTPError, ) as err:
                         if err.code < 500 or err.code >= 600:
                             raise
                     else:
                         # Examine the reported length
                         if (content_length is not None and
-                                (resume_len - 100 < long(content_length) < resume_len + 100)):
+                                (resume_len - 100 < int(content_length) < resume_len + 100)):
                             # The file had already been fully downloaded.
                             # Explanation to the above condition: in issue #175 it was revealed that
                             # YouTube sometimes adds or removes a few bytes from the end of the file,
@@ -634,10 +678,10 @@ class FileDownloader(object):
         data_len = data.info().get('Content-length', None)
         if data_len is not None:
-            data_len = long(data_len) + resume_len
+            data_len = int(data_len) + resume_len
             data_len_str = self.format_bytes(data_len)
         byte_counter = 0 + resume_len
-        block_size = 1024
+        block_size = self.params.get('buffersize', 1024)
         start = time.time()
         while True:
             # Download and write
@@ -655,14 +699,15 @@ class FileDownloader(object):
                     assert stream is not None
                     filename = self.undo_temp_name(tmpfilename)
                     self.report_destination(filename)
-            except (OSError, IOError), err:
+            except (OSError, IOError) as err:
                 self.trouble(u'ERROR: unable to open for writing: %s' % str(err))
                 return False
             try:
                 stream.write(data_block)
-            except (IOError, OSError), err:
+            except (IOError, OSError) as err:
                 self.trouble(u'\nERROR: unable to write data: %s' % str(err))
                 return False
+            if not self.params.get('noresizebuffer', False):
                 block_size = self.best_block_size(after - before, len(data_block))

             # Progress message
@@ -683,7 +728,7 @@ class FileDownloader(object):
         stream.close()
         self.report_finish()
         if data_len is not None and byte_counter != data_len:
-            raise ContentTooShortError(byte_counter, long(data_len))
+            raise ContentTooShortError(byte_counter, int(data_len))
         self.try_rename(tmpfilename, filename)

         # Update file modification time

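The new test parameter is what lets the revamped test suite verify extractors without full downloads; a hedged standalone illustration of the same Range trick (placeholder URL, not the class itself):

try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2

# Same header and byte range as _do_download above; example.com is a
# placeholder.
req = Request('http://example.com/video.mp4', None,
              {'Youtubedl-no-compression': 'True'})
req.add_header('Range', 'bytes=0-10240')
data = urlopen(req).read()
print(len(data))  # at most 10241 bytes if the server honors Range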
youtube_dl/InfoExtractors.py Normal file → Executable file

@@ -1,65 +1,70 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-

+from __future__ import absolute_import
+
+import base64
 import datetime
-import HTMLParser
-import httplib
 import netrc
 import os
 import re
 import socket
 import time
-import urllib
-import urllib2
 import email.utils
 import xml.etree.ElementTree
 import random
 import math
-from urlparse import parse_qs, urlparse
-try:
-    import cStringIO as StringIO
-except ImportError:
-    import StringIO

-from utils import *
+from .utils import *

 class InfoExtractor(object):
     """Information Extractor class.

     Information extractors are the classes that, given a URL, extract
-    information from the video (or videos) the URL refers to. This
-    information includes the real video URL, the video title and simplified
-    title, author and others. The information is stored in a dictionary
-    which is then passed to the FileDownloader. The FileDownloader
-    processes this information possibly downloading the video to the file
-    system, among other possible outcomes. The dictionaries must include
-    the following fields:
+    information about the video (or videos) the URL refers to. This
+    information includes the real video URL, the video title, author and
+    others. The information is stored in a dictionary which is then
+    passed to the FileDownloader. The FileDownloader processes this
+    information possibly downloading the video to the file system, among
+    other possible outcomes.
+
+    The dictionaries must include the following fields:

     id:             Video identifier.
     url:            Final video URL.
-    uploader:       Nickname of the video uploader.
-    title:          Literal title.
+    title:          Video title, unescaped.
     ext:            Video filename extension.
-    format:         Video format.
-    player_url:     SWF Player URL (may be None).
+    uploader:       Full name of the video uploader.
+    upload_date:    Video upload date (YYYYMMDD).

-    The following fields are optional. Their primary purpose is to allow
-    youtube-dl to serve as the backend for a video search function, such
-    as the one in youtube2mp3. They are only used when their respective
-    forced printing functions are called:
+    The following fields are optional:

+    format:         The video format, defaults to ext (used for --get-format)
     thumbnail:      Full URL to a video thumbnail image.
     description:    One-line video description.
+    uploader_id:    Nickname or id of the video uploader.
+    player_url:     SWF Player URL (used for rtmpdump).
+    subtitles:      The .srt file contents.
+    urlhandle:      [internal] The urlHandle to be used to download the file,
+                    like returned by urllib.request.urlopen
+
+    The fields should all be Unicode strings.

     Subclasses of this one should re-define the _real_initialize() and
     _real_extract() methods and define a _VALID_URL regexp.
     Probably, they should also be added to the list of extractors.

+    _real_extract() must return a *list* of information dictionaries as
+    described above.
+
+    Finally, the _WORKING attribute should be set to False for broken IEs
+    in order to warn the users and skip the tests.
     """

     _ready = False
     _downloader = None
+    _WORKING = True

     def __init__(self, downloader=None):
         """Constructor. Receives an optional downloader."""
@@ -70,6 +75,10 @@ class InfoExtractor(object):
         """Receives a URL and returns True if suitable for this IE."""
         return re.match(self._VALID_URL, url) is not None

+    def working(self):
+        """Getter method for _WORKING."""
+        return self._WORKING
+
     def initialize(self):
         """Initializes an instance (authentication, etc)."""
         if not self._ready:
@@ -93,7 +102,22 @@ class InfoExtractor(object):
         """Real extraction process. Redefine in subclasses."""
         pass

+    @property
+    def IE_NAME(self):
+        return type(self).__name__[:-2]
+
+    def _download_webpage(self, url_or_request, video_id, note=None, errnote=None):
+        if note is None:
+            note = u'Downloading video webpage'
+        self._downloader.to_screen(u'[%s] %s: %s' % (self.IE_NAME, video_id, note))
+        try:
+            urlh = compat_urllib_request.urlopen(url_or_request)
+            webpage_bytes = urlh.read()
+            return webpage_bytes.decode('utf-8', 'replace')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            if errnote is None:
+                errnote = u'Unable to download webpage'
+            raise ExtractorError(u'%s: %s' % (errnote, compat_str(err)), sys.exc_info()[2])

 class YoutubeIE(InfoExtractor):
@@ -111,7 +135,7 @@ class YoutubeIE(InfoExtractor):
                      |(?:                          # or the v= param in all its forms
                          (?:watch(?:_popup)?(?:\.php)?)?  # preceding watch(_popup|.php) or nothing (like /?v=xxxx)
                          (?:\?|\#!?)               # the params delimiter ? or # or #!
-                         (?:.+&)?                  # any other preceding param (like /?s=tuff&v=xxxx)
+                         (?:.*?&)?                 # any other preceding param (like /?s=tuff&v=xxxx)
                          v=
                      )
                  )?                                # optional -> youtube.com/xxxx is OK
@@ -214,10 +238,38 @@ class YoutubeIE(InfoExtractor):
             srt += caption + '\n\n'
         return srt

+    def _extract_subtitles(self, video_id):
+        self.report_video_subtitles_download(video_id)
+        request = compat_urllib_request.Request('http://video.google.com/timedtext?hl=en&type=list&v=%s' % video_id)
+        try:
+            srt_list = compat_urllib_request.urlopen(request).read().decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            return (u'WARNING: unable to download video subtitles: %s' % compat_str(err), None)
+        srt_lang_list = re.findall(r'name="([^"]*)"[^>]+lang_code="([\w\-]+)"', srt_list)
+        srt_lang_list = dict((l[1], l[0]) for l in srt_lang_list)
+        if not srt_lang_list:
+            return (u'WARNING: video has no closed captions', None)
+        if self._downloader.params.get('subtitleslang', False):
+            srt_lang = self._downloader.params.get('subtitleslang')
+        elif 'en' in srt_lang_list:
+            srt_lang = 'en'
+        else:
+            srt_lang = list(srt_lang_list.keys())[0]
+        if not srt_lang in srt_lang_list:
+            return (u'WARNING: no closed captions found in the specified language', None)
+        request = compat_urllib_request.Request('http://www.youtube.com/api/timedtext?lang=%s&name=%s&v=%s' % (srt_lang, srt_lang_list[srt_lang], video_id))
+        try:
+            srt_xml = compat_urllib_request.urlopen(request).read().decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            return (u'WARNING: unable to download video subtitles: %s' % compat_str(err), None)
+        if not srt_xml:
+            return (u'WARNING: unable to download video subtitles', None)
+        return (None, self._closed_captions_xml_to_srt(srt_xml))
+
     def _print_formats(self, formats):
-        print 'Available formats:'
+        print('Available formats:')
         for x in formats:
-            print '%s\t:\t%s\t[%s]' %(x, self._video_extensions.get(x, 'flv'), self._video_dimensions.get(x, '???'))
+            print('%s\t:\t%s\t[%s]' %(x, self._video_extensions.get(x, 'flv'), self._video_dimensions.get(x, '???')))

     def _real_initialize(self):
         if self._downloader is None:
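# The new _extract_subtitles above works in two steps: list the
# available caption tracks, then fetch one as XML and convert it to
# .srt. A hedged standalone sketch of just the listing step, using the
# same endpoint and regex as the hunk (network access and the 2013-era
# API shape are assumed):
import re
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

video_id = 'QRS8MkLhQmM'  # the video the subtitle tests use
listing = urlopen('http://video.google.com/timedtext?hl=en&type=list&v=%s'
                  % video_id).read().decode('utf-8')
tracks = re.findall(r'name="([^"]*)"[^>]+lang_code="([\w\-]+)"', listing)
print(dict((lang, name) for name, lang in tracks))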
@@ -239,17 +291,17 @@ class YoutubeIE(InfoExtractor):
                 password = info[2]
             else:
                 raise netrc.NetrcParseError('No authenticators for %s' % self._NETRC_MACHINE)
-        except (IOError, netrc.NetrcParseError), err:
-            self._downloader.to_stderr(u'WARNING: parsing .netrc: %s' % str(err))
+        except (IOError, netrc.NetrcParseError) as err:
+            self._downloader.to_stderr(u'WARNING: parsing .netrc: %s' % compat_str(err))
             return

         # Set language
-        request = urllib2.Request(self._LANG_URL)
+        request = compat_urllib_request.Request(self._LANG_URL)
         try:
             self.report_lang()
-            urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.to_stderr(u'WARNING: unable to set language: %s' % str(err))
+            compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.to_stderr(u'WARNING: unable to set language: %s' % compat_str(err))
             return

         # No authentication to be performed
@@ -264,15 +316,15 @@ class YoutubeIE(InfoExtractor):
                 'username': username,
                 'password': password,
                 }
-        request = urllib2.Request(self._LOGIN_URL, urllib.urlencode(login_form))
+        request = compat_urllib_request.Request(self._LOGIN_URL, compat_urllib_parse.urlencode(login_form))
         try:
             self.report_login()
-            login_results = urllib2.urlopen(request).read()
+            login_results = compat_urllib_request.urlopen(request).read().decode('utf-8')
             if re.search(r'(?i)<form[^>]* name="loginForm"', login_results) is not None:
                 self._downloader.to_stderr(u'WARNING: unable to log in: bad username or password')
                 return
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.to_stderr(u'WARNING: unable to log in: %s' % str(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.to_stderr(u'WARNING: unable to log in: %s' % compat_str(err))
             return

         # Confirm age
@@ -280,36 +332,41 @@ class YoutubeIE(InfoExtractor):
                 'next_url': '/',
                 'action_confirm': 'Confirm',
                 }
-        request = urllib2.Request(self._AGE_URL, urllib.urlencode(age_form))
+        request = compat_urllib_request.Request(self._AGE_URL, compat_urllib_parse.urlencode(age_form))
         try:
             self.report_age_confirmation()
-            age_results = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to confirm age: %s' % str(err))
+            age_results = compat_urllib_request.urlopen(request).read().decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to confirm age: %s' % compat_str(err))
             return

-    def _real_extract(self, url):
-        # Extract original video URL from URL with redirection, like age verification, using next_url parameter
-        mobj = re.search(self._NEXT_URL_RE, url)
-        if mobj:
-            url = 'http://www.youtube.com/' + urllib.unquote(mobj.group(1)).lstrip('/')
-
-        # Extract video id from URL
+    def _extract_id(self, url):
         mobj = re.match(self._VALID_URL, url, re.VERBOSE)
         if mobj is None:
             self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
             return
         video_id = mobj.group(2)
+        return video_id
+
+    def _real_extract(self, url):
+        # Extract original video URL from URL with redirection, like age verification, using next_url parameter
+        mobj = re.search(self._NEXT_URL_RE, url)
+        if mobj:
+            url = 'http://www.youtube.com/' + compat_urllib_parse.unquote(mobj.group(1)).lstrip('/')
+        video_id = self._extract_id(url)

         # Get video webpage
         self.report_video_webpage_download(video_id)
-        request = urllib2.Request('http://www.youtube.com/watch?v=%s&gl=US&hl=en&has_verified=1' % video_id)
+        url = 'http://www.youtube.com/watch?v=%s&gl=US&hl=en&has_verified=1' % video_id
+        request = compat_urllib_request.Request(url)
         try:
-            video_webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
+            video_webpage_bytes = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % compat_str(err))
             return

+        video_webpage = video_webpage_bytes.decode('utf-8', 'ignore')
+
         # Attempt to extract SWF player URL
         mobj = re.search(r'swfConfig.*?"(http:\\/\\/.*?watch.*?-.*?\.swf)"', video_webpage)
         if mobj is not None:
@@ -322,18 +379,19 @@ class YoutubeIE(InfoExtractor):
         for el_type in ['&el=embedded', '&el=detailpage', '&el=vevo', '']:
             video_info_url = ('http://www.youtube.com/get_video_info?&video_id=%s%s&ps=default&eurl=&gl=US&hl=en'
                     % (video_id, el_type))
-            request = urllib2.Request(video_info_url)
+            request = compat_urllib_request.Request(video_info_url)
             try:
-                video_info_webpage = urllib2.urlopen(request).read()
-                video_info = parse_qs(video_info_webpage)
+                video_info_webpage_bytes = compat_urllib_request.urlopen(request).read()
+                video_info_webpage = video_info_webpage_bytes.decode('utf-8', 'ignore')
+                video_info = compat_parse_qs(video_info_webpage)
                 if 'token' in video_info:
                     break
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download video info webpage: %s' % str(err))
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download video info webpage: %s' % compat_str(err))
                 return
         if 'token' not in video_info:
             if 'reason' in video_info:
-                self._downloader.trouble(u'ERROR: YouTube said: %s' % video_info['reason'][0].decode('utf-8'))
+                self._downloader.trouble(u'ERROR: YouTube said: %s' % video_info['reason'][0])
             else:
                 self._downloader.trouble(u'ERROR: "token" parameter not in video info for unknown reason')
             return
@@ -348,26 +406,33 @@ class YoutubeIE(InfoExtractor):
         # uploader
         if 'author' not in video_info:
-            self._downloader.trouble(u'ERROR: unable to extract uploader nickname')
+            self._downloader.trouble(u'ERROR: unable to extract uploader name')
             return
-        video_uploader = urllib.unquote_plus(video_info['author'][0])
+        video_uploader = compat_urllib_parse.unquote_plus(video_info['author'][0])
+
+        # uploader_id
+        video_uploader_id = None
+        mobj = re.search(r'<link itemprop="url" href="http://www.youtube.com/(?:user|channel)/([^"]+)">', video_webpage)
+        if mobj is not None:
+            video_uploader_id = mobj.group(1)
+        else:
+            self._downloader.trouble(u'WARNING: unable to extract uploader nickname')

         # title
         if 'title' not in video_info:
             self._downloader.trouble(u'ERROR: unable to extract video title')
             return
-        video_title = urllib.unquote_plus(video_info['title'][0])
-        video_title = video_title.decode('utf-8')
+        video_title = compat_urllib_parse.unquote_plus(video_info['title'][0])

         # thumbnail image
         if 'thumbnail_url' not in video_info:
             self._downloader.trouble(u'WARNING: unable to extract video thumbnail')
             video_thumbnail = ''
         else:   # don't panic if we can't find it
-            video_thumbnail = urllib.unquote_plus(video_info['thumbnail_url'][0])
+            video_thumbnail = compat_urllib_parse.unquote_plus(video_info['thumbnail_url'][0])

         # upload date
-        upload_date = u'NA'
+        upload_date = None
         mobj = re.search(r'id="eow-date.*?>(.*?)</span>', video_webpage, re.DOTALL)
         if mobj is not None:
             upload_date = ' '.join(re.sub(r'[/,-]', r' ', mobj.group(1)).split())
@@ -379,51 +444,27 @@ class YoutubeIE(InfoExtractor):
                 pass

         # description
-        video_description = get_element_by_id("eow-description", video_webpage.decode('utf8'))
-        if video_description: video_description = clean_html(video_description)
-        else: video_description = ''
+        video_description = get_element_by_id("eow-description", video_webpage)
+        if video_description:
+            video_description = clean_html(video_description)
+        else:
+            video_description = ''

         # closed captions
         video_subtitles = None
         if self._downloader.params.get('writesubtitles', False):
-            try:
-                self.report_video_subtitles_download(video_id)
-                request = urllib2.Request('http://video.google.com/timedtext?hl=en&type=list&v=%s' % video_id)
-                try:
-                    srt_list = urllib2.urlopen(request).read()
-                except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                    raise Trouble(u'WARNING: unable to download video subtitles: %s' % str(err))
-                srt_lang_list = re.findall(r'name="([^"]*)"[^>]+lang_code="([\w\-]+)"', srt_list)
-                srt_lang_list = dict((l[1], l[0]) for l in srt_lang_list)
-                if not srt_lang_list:
-                    raise Trouble(u'WARNING: video has no closed captions')
-                if self._downloader.params.get('subtitleslang', False):
-                    srt_lang = self._downloader.params.get('subtitleslang')
-                elif 'en' in srt_lang_list:
-                    srt_lang = 'en'
-                else:
-                    srt_lang = srt_lang_list.keys()[0]
-                if not srt_lang in srt_lang_list:
-                    raise Trouble(u'WARNING: no closed captions found in the specified language')
-                request = urllib2.Request('http://www.youtube.com/api/timedtext?lang=%s&name=%s&v=%s' % (srt_lang, srt_lang_list[srt_lang], video_id))
-                try:
-                    srt_xml = urllib2.urlopen(request).read()
-                except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                    raise Trouble(u'WARNING: unable to download video subtitles: %s' % str(err))
-                if not srt_xml:
-                    raise Trouble(u'WARNING: unable to download video subtitles')
-                video_subtitles = self._closed_captions_xml_to_srt(srt_xml.decode('utf-8'))
-            except Trouble as trouble:
-                self._downloader.trouble(trouble[0])
+            (srt_error, video_subtitles) = self._extract_subtitles(video_id)
+            if srt_error:
+                self._downloader.trouble(srt_error)

         if 'length_seconds' not in video_info:
             self._downloader.trouble(u'WARNING: unable to extract video duration')
             video_duration = ''
         else:
-            video_duration = urllib.unquote_plus(video_info['length_seconds'][0])
+            video_duration = compat_urllib_parse.unquote_plus(video_info['length_seconds'][0])

         # token
-        video_token = urllib.unquote_plus(video_info['token'][0])
+        video_token = compat_urllib_parse.unquote_plus(video_info['token'][0])

         # Decide which formats to download
         req_format = self._downloader.params.get('format', None)
@@ -433,8 +474,8 @@ class YoutubeIE(InfoExtractor):
             video_url_list = [(None, video_info['conn'][0])]
         elif 'url_encoded_fmt_stream_map' in video_info and len(video_info['url_encoded_fmt_stream_map']) >= 1:
             url_data_strs = video_info['url_encoded_fmt_stream_map'][0].split(',')
-            url_data = [parse_qs(uds) for uds in url_data_strs]
-            url_data = filter(lambda ud: 'itag' in ud and 'url' in ud, url_data)
+            url_data = [compat_parse_qs(uds) for uds in url_data_strs]
+            url_data = [ud for ud in url_data if 'itag' in ud and 'url' in ud]
             url_map = dict((ud['itag'][0], ud['url'][0] + '&signature=' + ud['sig'][0]) for ud in url_data)

             format_limit = self._downloader.params.get('format_limit', None)
@@ -477,15 +518,19 @@ class YoutubeIE(InfoExtractor):
             # Extension
             video_extension = self._video_extensions.get(format_param, 'flv')

+            video_format = '{0} - {1}'.format(format_param if format_param else video_extension,
+                                              self._video_dimensions.get(format_param, '???'))
+
             results.append({
-                'id': video_id.decode('utf-8'),
-                'url': video_real_url.decode('utf-8'),
-                'uploader': video_uploader.decode('utf-8'),
+                'id': video_id,
+                'url': video_real_url,
+                'uploader': video_uploader,
+                'uploader_id': video_uploader_id,
                 'upload_date': upload_date,
                 'title': video_title,
-                'ext': video_extension.decode('utf-8'),
-                'format': (format_param is None and u'NA' or format_param.decode('utf-8')),
-                'thumbnail': video_thumbnail.decode('utf-8'),
+                'ext': video_extension,
+                'format': video_format,
+                'thumbnail': video_thumbnail,
                 'description': video_description,
                 'player_url': player_url,
                 'subtitles': video_subtitles,
@ -523,12 +568,12 @@ class MetacafeIE(InfoExtractor):
def _real_initialize(self): def _real_initialize(self):
# Retrieve disclaimer # Retrieve disclaimer
request = urllib2.Request(self._DISCLAIMER) request = compat_urllib_request.Request(self._DISCLAIMER)
try: try:
self.report_disclaimer() self.report_disclaimer()
disclaimer = urllib2.urlopen(request).read() disclaimer = compat_urllib_request.urlopen(request).read()
except (urllib2.URLError, httplib.HTTPException, socket.error), err: except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
self._downloader.trouble(u'ERROR: unable to retrieve disclaimer: %s' % str(err)) self._downloader.trouble(u'ERROR: unable to retrieve disclaimer: %s' % compat_str(err))
return return
# Confirm age # Confirm age
@ -536,12 +581,12 @@ class MetacafeIE(InfoExtractor):
'filters': '0', 'filters': '0',
'submit': "Continue - I'm over 18", 'submit': "Continue - I'm over 18",
} }
request = urllib2.Request(self._FILTER_POST, urllib.urlencode(disclaimer_form)) request = compat_urllib_request.Request(self._FILTER_POST, compat_urllib_parse.urlencode(disclaimer_form))
try: try:
self.report_age_confirmation() self.report_age_confirmation()
disclaimer = urllib2.urlopen(request).read() disclaimer = compat_urllib_request.urlopen(request).read()
except (urllib2.URLError, httplib.HTTPException, socket.error), err: except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
self._downloader.trouble(u'ERROR: unable to confirm age: %s' % str(err)) self._downloader.trouble(u'ERROR: unable to confirm age: %s' % compat_str(err))
return return
def _real_extract(self, url): def _real_extract(self, url):
@ -560,19 +605,19 @@ class MetacafeIE(InfoExtractor):
return return
# Retrieve video webpage to extract further information # Retrieve video webpage to extract further information
request = urllib2.Request('http://www.metacafe.com/watch/%s/' % video_id) request = compat_urllib_request.Request('http://www.metacafe.com/watch/%s/' % video_id)
try: try:
self.report_download_webpage(video_id) self.report_download_webpage(video_id)
webpage = urllib2.urlopen(request).read() webpage = compat_urllib_request.urlopen(request).read()
except (urllib2.URLError, httplib.HTTPException, socket.error), err: except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
self._downloader.trouble(u'ERROR: unable retrieve video webpage: %s' % str(err)) self._downloader.trouble(u'ERROR: unable retrieve video webpage: %s' % compat_str(err))
return return
# Extract URL, uploader and title from webpage # Extract URL, uploader and title from webpage
self.report_extraction(video_id) self.report_extraction(video_id)
mobj = re.search(r'(?m)&mediaURL=([^&]+)', webpage) mobj = re.search(r'(?m)&mediaURL=([^&]+)', webpage)
if mobj is not None: if mobj is not None:
mediaURL = urllib.unquote(mobj.group(1)) mediaURL = compat_urllib_parse.unquote(mobj.group(1))
video_extension = mediaURL[-3:] video_extension = mediaURL[-3:]
# Extract gdaKey if available # Extract gdaKey if available
@ -587,7 +632,7 @@ class MetacafeIE(InfoExtractor):
if mobj is None: if mobj is None:
self._downloader.trouble(u'ERROR: unable to extract media URL') self._downloader.trouble(u'ERROR: unable to extract media URL')
return return
vardict = parse_qs(mobj.group(1)) vardict = compat_parse_qs(mobj.group(1))
if 'mediaData' not in vardict: if 'mediaData' not in vardict:
self._downloader.trouble(u'ERROR: unable to extract media URL') self._downloader.trouble(u'ERROR: unable to extract media URL')
return return
@ -615,11 +660,9 @@ class MetacafeIE(InfoExtractor):
'id': video_id.decode('utf-8'), 'id': video_id.decode('utf-8'),
'url': video_url.decode('utf-8'), 'url': video_url.decode('utf-8'),
'uploader': video_uploader.decode('utf-8'), 'uploader': video_uploader.decode('utf-8'),
'upload_date': u'NA', 'upload_date': None,
'title': video_title, 'title': video_title,
'ext': video_extension.decode('utf-8'), 'ext': video_extension.decode('utf-8'),
'format': u'NA',
'player_url': None,
}] }]
@ -632,10 +675,6 @@ class DailymotionIE(InfoExtractor):
def __init__(self, downloader=None): def __init__(self, downloader=None):
         InfoExtractor.__init__(self, downloader)
 
-    def report_download_webpage(self, video_id):
-        """Report webpage download."""
-        self._downloader.to_screen(u'[dailymotion] %s: Downloading webpage' % video_id)
-
     def report_extraction(self, video_id):
         """Report information extraction."""
         self._downloader.to_screen(u'[dailymotion] %s: Extracting information' % video_id)
@@ -652,14 +691,9 @@ class DailymotionIE(InfoExtractor):
         video_extension = 'mp4'
 
         # Retrieve video webpage to extract further information
-        request = urllib2.Request(url)
+        request = compat_urllib_request.Request(url)
         request.add_header('Cookie', 'family_filter=off')
-        try:
-            self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable retrieve video webpage: %s' % str(err))
-            return
+        webpage = self._download_webpage(request, video_id)
 
         # Extract URL, uploader and title from webpage
         self.report_extraction(video_id)
@@ -667,7 +701,7 @@ class DailymotionIE(InfoExtractor):
         if mobj is None:
             self._downloader.trouble(u'ERROR: unable to extract media URL')
             return
-        flashvars = urllib.unquote(mobj.group(1))
+        flashvars = compat_urllib_parse.unquote(mobj.group(1))
 
         for key in ['hd1080URL', 'hd720URL', 'hqURL', 'sdURL', 'ldURL', 'video_url']:
             if key in flashvars:
@@ -683,7 +717,7 @@ class DailymotionIE(InfoExtractor):
             self._downloader.trouble(u'ERROR: unable to extract video URL')
             return
-        video_url = urllib.unquote(mobj.group(1)).replace('\\/', '/')
+        video_url = compat_urllib_parse.unquote(mobj.group(1)).replace('\\/', '/')
 
         # TODO: support choosing qualities
@@ -691,9 +725,9 @@ class DailymotionIE(InfoExtractor):
         if mobj is None:
             self._downloader.trouble(u'ERROR: unable to extract title')
             return
-        video_title = unescapeHTML(mobj.group('title').decode('utf-8'))
+        video_title = unescapeHTML(mobj.group('title'))
 
-        video_uploader = u'NA'
+        video_uploader = None
         mobj = re.search(r'(?im)<span class="owner[^\"]+?">[^<]+?<a [^>]+?>([^<]+?)</a>', webpage)
         if mobj is None:
             # lookin for official user
@@ -705,115 +739,18 @@ class DailymotionIE(InfoExtractor):
             else:
                 video_uploader = mobj.group(1)
 
-        video_upload_date = u'NA'
+        video_upload_date = None
         mobj = re.search(r'<div class="[^"]*uploaded_cont[^"]*" title="[^"]*">([0-9]{2})-([0-9]{2})-([0-9]{4})</div>', webpage)
         if mobj is not None:
             video_upload_date = mobj.group(3) + mobj.group(2) + mobj.group(1)
 
         return [{
-            'id': video_id.decode('utf-8'),
-            'url': video_url.decode('utf-8'),
-            'uploader': video_uploader.decode('utf-8'),
+            'id': video_id,
+            'url': video_url,
+            'uploader': video_uploader,
             'upload_date': video_upload_date,
             'title': video_title,
-            'ext': video_extension.decode('utf-8'),
-            'format': u'NA',
-            'player_url': None,
+            'ext': video_extension,
         }]
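The `self._download_webpage(request, video_id)` call that replaces the hand-rolled try/except block above is a shared helper on the InfoExtractor base class, which is also why the per-IE report_download_webpage method can be dropped. A minimal sketch of the pattern, assuming the compat_* aliases used throughout this file; the real helper does more (e.g. encoding handling), so this is illustrative only:

    # Sketch only -- the actual InfoExtractor._download_webpage may differ.
    def _download_webpage(self, url_or_request, video_id):
        """Return the decoded body of a page, routing network errors through trouble()."""
        self.report_download_webpage(video_id)
        try:
            urlh = compat_urllib_request.urlopen(url_or_request)
            return urlh.read().decode('utf-8')
        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
            self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))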
 
-class GoogleIE(InfoExtractor):
-    """Information extractor for video.google.com."""
-
-    _VALID_URL = r'(?:http://)?video\.google\.(?:com(?:\.au)?|co\.(?:uk|jp|kr|cr)|ca|de|es|fr|it|nl|pl)/videoplay\?docid=([^\&]+).*'
-    IE_NAME = u'video.google'
-
-    def __init__(self, downloader=None):
-        InfoExtractor.__init__(self, downloader)
-
-    def report_download_webpage(self, video_id):
-        """Report webpage download."""
-        self._downloader.to_screen(u'[video.google] %s: Downloading webpage' % video_id)
-
-    def report_extraction(self, video_id):
-        """Report information extraction."""
-        self._downloader.to_screen(u'[video.google] %s: Extracting information' % video_id)
-
-    def _real_extract(self, url):
-        # Extract id from URL
-        mobj = re.match(self._VALID_URL, url)
-        if mobj is None:
-            self._downloader.trouble(u'ERROR: Invalid URL: %s' % url)
-            return
-
-        video_id = mobj.group(1)
-
-        video_extension = 'mp4'
-
-        # Retrieve video webpage to extract further information
-        request = urllib2.Request('http://video.google.com/videoplay?docid=%s&hl=en&oe=utf-8' % video_id)
-        try:
-            self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
-            return
-
-        # Extract URL, uploader, and title from webpage
-        self.report_extraction(video_id)
-        mobj = re.search(r"download_url:'([^']+)'", webpage)
-        if mobj is None:
-            video_extension = 'flv'
-            mobj = re.search(r"(?i)videoUrl\\x3d(.+?)\\x26", webpage)
-        if mobj is None:
-            self._downloader.trouble(u'ERROR: unable to extract media URL')
-            return
-        mediaURL = urllib.unquote(mobj.group(1))
-        mediaURL = mediaURL.replace('\\x3d', '\x3d')
-        mediaURL = mediaURL.replace('\\x26', '\x26')
-
-        video_url = mediaURL
-
-        mobj = re.search(r'<title>(.*)</title>', webpage)
-        if mobj is None:
-            self._downloader.trouble(u'ERROR: unable to extract title')
-            return
-        video_title = mobj.group(1).decode('utf-8')
-
-        # Extract video description
-        mobj = re.search(r'<span id=short-desc-content>([^<]*)</span>', webpage)
-        if mobj is None:
-            self._downloader.trouble(u'ERROR: unable to extract video description')
-            return
-        video_description = mobj.group(1).decode('utf-8')
-        if not video_description:
-            video_description = 'No description available.'
-
-        # Extract video thumbnail
-        if self._downloader.params.get('forcethumbnail', False):
-            request = urllib2.Request('http://video.google.com/videosearch?q=%s+site:video.google.com&hl=en' % abs(int(video_id)))
-            try:
-                webpage = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
-                return
-            mobj = re.search(r'<img class=thumbnail-img (?:.* )?src=(http.*)>', webpage)
-            if mobj is None:
-                self._downloader.trouble(u'ERROR: unable to extract video thumbnail')
-                return
-            video_thumbnail = mobj.group(1)
-        else:   # we need something to pass to process_info
-            video_thumbnail = ''
-
-        return [{
-            'id': video_id.decode('utf-8'),
-            'url': video_url.decode('utf-8'),
-            'uploader': u'NA',
-            'upload_date': u'NA',
-            'title': video_title,
-            'ext': video_extension.decode('utf-8'),
-            'format': u'NA',
-            'player_url': None,
-        }]
@@ -846,12 +783,12 @@ class PhotobucketIE(InfoExtractor):
         video_extension = 'flv'
 
         # Retrieve video webpage to extract further information
-        request = urllib2.Request(url)
+        request = compat_urllib_request.Request(url)
         try:
             self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
             return
 
         # Extract URL, uploader, and title from webpage
@@ -860,7 +797,7 @@ class PhotobucketIE(InfoExtractor):
         if mobj is None:
             self._downloader.trouble(u'ERROR: unable to extract media URL')
             return
-        mediaURL = urllib.unquote(mobj.group(1))
+        mediaURL = compat_urllib_parse.unquote(mobj.group(1))
 
         video_url = mediaURL
@@ -876,17 +813,16 @@ class PhotobucketIE(InfoExtractor):
             'id': video_id.decode('utf-8'),
             'url': video_url.decode('utf-8'),
             'uploader': video_uploader,
-            'upload_date': u'NA',
+            'upload_date': None,
             'title': video_title,
             'ext': video_extension.decode('utf-8'),
-            'format': u'NA',
-            'player_url': None,
         }]
 
 class YahooIE(InfoExtractor):
     """Information extractor for video.yahoo.com."""
 
+    _WORKING = False
+
     # _VALID_URL matches all Yahoo! Video URLs
     # _VPAGE_URL matches only the extractable '/watch/' URLs
     _VALID_URL = r'(?:http://)?(?:[a-z]+\.)?video\.yahoo\.com/(?:watch|network)/([0-9]+)(?:/|\?v=)([0-9]+)(?:[#\?].*)?'
@@ -917,11 +853,11 @@ class YahooIE(InfoExtractor):
         # Rewrite valid but non-extractable URLs as
         # extractable English language /watch/ URLs
         if re.match(self._VPAGE_URL, url) is None:
-            request = urllib2.Request(url)
+            request = compat_urllib_request.Request(url)
             try:
-                webpage = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+                webpage = compat_urllib_request.urlopen(request).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
                 return
 
             mobj = re.search(r'\("id", "([0-9]+)"\);', webpage)
@@ -940,12 +876,12 @@ class YahooIE(InfoExtractor):
             return self._real_extract(url, new_video=False)
 
         # Retrieve video webpage to extract further information
-        request = urllib2.Request(url)
+        request = compat_urllib_request.Request(url)
         try:
             self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
             return
 
         # Extract uploader and title from webpage
@@ -996,14 +932,14 @@ class YahooIE(InfoExtractor):
         # seem to need most of them, otherwise the server sends a 401.
         yv_lg = 'R0xx6idZnW2zlrKP8xxAIR'  # not sure what this represents
         yv_bitrate = '700'  # according to Wikipedia this is hard-coded
-        request = urllib2.Request('http://cosmos.bcst.yahoo.com/up/yep/process/getPlaylistFOP.php?node_id=' + video_id +
+        request = compat_urllib_request.Request('http://cosmos.bcst.yahoo.com/up/yep/process/getPlaylistFOP.php?node_id=' + video_id +
                 '&tech=flash&mode=playlist&lg=' + yv_lg + '&bitrate=' + yv_bitrate + '&vidH=' + yv_video_height +
                 '&vidW=' + yv_video_width + '&swf=as3&rd=video.yahoo.com&tk=null&adsupported=v1,v2,&eventid=1301797')
         try:
             self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
             return
 
         # Extract media URL from playlist XML
@@ -1011,20 +947,18 @@ class YahooIE(InfoExtractor):
         if mobj is None:
             self._downloader.trouble(u'ERROR: Unable to extract media URL')
             return
-        video_url = urllib.unquote(mobj.group(1) + mobj.group(2)).decode('utf-8')
+        video_url = compat_urllib_parse.unquote(mobj.group(1) + mobj.group(2)).decode('utf-8')
         video_url = unescapeHTML(video_url)
 
         return [{
             'id': video_id.decode('utf-8'),
             'url': video_url,
             'uploader': video_uploader,
-            'upload_date': u'NA',
+            'upload_date': None,
             'title': video_title,
             'ext': video_extension.decode('utf-8'),
-            'thumbnail': video_thumbnail.decode('utf-8'),
-            'description': video_description,
-            'thumbnail': video_thumbnail,
-            'player_url': None,
+            'description': video_description,
+            'thumbnail': video_thumbnail,
         }]
@@ -1032,7 +966,7 @@ class VimeoIE(InfoExtractor):
     """Information extractor for vimeo.com."""
 
     # _VALID_URL matches Vimeo URLs
-    _VALID_URL = r'(?:https?://)?(?:(?:www|player).)?vimeo\.com/(?:groups/[^/]+/)?(?:videos?/)?([0-9]+)'
+    _VALID_URL = r'(?:https?://)?(?:(?:www|player).)?vimeo\.com/(?:(?:groups|album)/[^/]+/)?(?:videos?/)?([0-9]+)'
     IE_NAME = u'vimeo'
 
     def __init__(self, downloader=None):
@@ -1056,12 +990,13 @@ class VimeoIE(InfoExtractor):
         video_id = mobj.group(1)
 
         # Retrieve video webpage to extract further information
-        request = urllib2.Request(url, None, std_headers)
+        request = compat_urllib_request.Request(url, None, std_headers)
         try:
             self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            webpage_bytes = compat_urllib_request.urlopen(request).read()
+            webpage = webpage_bytes.decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
             return
 
         # Now we begin extracting as much information as we can from what we
@@ -1070,8 +1005,8 @@ class VimeoIE(InfoExtractor):
         self.report_extraction(video_id)
 
         # Extract the config JSON
-        config = webpage.split(' = {config:')[1].split(',assets:')[0]
         try:
+            config = webpage.split(' = {config:')[1].split(',assets:')[0]
             config = json.loads(config)
         except:
             self._downloader.trouble(u'ERROR: unable to extract info section')
@@ -1080,57 +1015,205 @@ class VimeoIE(InfoExtractor):
         # Extract title
         video_title = config["video"]["title"]
 
-        # Extract uploader
+        # Extract uploader and uploader_id
         video_uploader = config["video"]["owner"]["name"]
+        video_uploader_id = config["video"]["owner"]["url"].split('/')[-1]
 
         # Extract video thumbnail
         video_thumbnail = config["video"]["thumbnail"]
 
         # Extract video description
-        video_description = get_element_by_id("description", webpage.decode('utf8'))
+        video_description = get_element_by_attribute("itemprop", "description", webpage)
         if video_description: video_description = clean_html(video_description)
         else: video_description = ''
 
         # Extract upload date
-        video_upload_date = u'NA'
-        mobj = re.search(r'<span id="clip-date" style="display:none">[^:]*: (.*?)( \([^\(]*\))?</span>', webpage)
+        video_upload_date = None
+        mobj = re.search(r'<meta itemprop="dateCreated" content="(\d{4})-(\d{2})-(\d{2})T', webpage)
         if mobj is not None:
-            video_upload_date = mobj.group(1)
+            video_upload_date = mobj.group(1) + mobj.group(2) + mobj.group(3)
 
         # Vimeo specific: extract request signature and timestamp
         sig = config['request']['signature']
         timestamp = config['request']['timestamp']
 
         # Vimeo specific: extract video codec and quality information
+        # First consider quality, then codecs, then take everything
         # TODO bind to format param
         codecs = [('h264', 'mp4'), ('vp8', 'flv'), ('vp6', 'flv')]
-        for codec in codecs:
-            if codec[0] in config["video"]["files"]:
-                video_codec = codec[0]
-                video_extension = codec[1]
-                if 'hd' in config["video"]["files"][codec[0]]: quality = 'hd'
-                else: quality = 'sd'
+        files = { 'hd': [], 'sd': [], 'other': []}
+        for codec_name, codec_extension in codecs:
+            if codec_name in config["video"]["files"]:
+                if 'hd' in config["video"]["files"][codec_name]:
+                    files['hd'].append((codec_name, codec_extension, 'hd'))
+                elif 'sd' in config["video"]["files"][codec_name]:
+                    files['sd'].append((codec_name, codec_extension, 'sd'))
+                else:
+                    files['other'].append((codec_name, codec_extension, config["video"]["files"][codec_name][0]))
+
+        for quality in ('hd', 'sd', 'other'):
+            if len(files[quality]) > 0:
+                video_quality = files[quality][0][2]
+                video_codec = files[quality][0][0]
+                video_extension = files[quality][0][1]
+                self._downloader.to_screen(u'[vimeo] %s: Downloading %s file at %s quality' % (video_id, video_codec.upper(), video_quality))
                 break
         else:
             self._downloader.trouble(u'ERROR: no known codec found')
             return
 
         video_url = "http://player.vimeo.com/play_redirect?clip_id=%s&sig=%s&time=%s&quality=%s&codecs=%s&type=moogaloop_local&embed_location=" \
-                    %(video_id, sig, timestamp, quality, video_codec.upper())
+                    %(video_id, sig, timestamp, video_quality, video_codec.upper())
 
         return [{
             'id': video_id,
             'url': video_url,
             'uploader': video_uploader,
+            'uploader_id': video_uploader_id,
             'upload_date': video_upload_date,
             'title': video_title,
             'ext': video_extension,
             'thumbnail': video_thumbnail,
             'description': video_description,
-            'player_url': None,
         }]
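The reworked Vimeo codec selection above buckets every available (codec, extension) pair by quality and then walks the buckets in order of preference, instead of stopping at the first codec found. A standalone sketch of the idea, with a hypothetical files_config dict standing in for config["video"]["files"]:

    # files_config is made-up sample data mirroring config["video"]["files"].
    files_config = {'h264': ['sd'], 'vp8': ['hd', 'sd']}

    codecs = [('h264', 'mp4'), ('vp8', 'flv'), ('vp6', 'flv')]
    files = {'hd': [], 'sd': [], 'other': []}
    for codec_name, codec_extension in codecs:
        if codec_name in files_config:
            if 'hd' in files_config[codec_name]:
                files['hd'].append((codec_name, codec_extension, 'hd'))
            elif 'sd' in files_config[codec_name]:
                files['sd'].append((codec_name, codec_extension, 'sd'))
            else:
                files['other'].append((codec_name, codec_extension, files_config[codec_name][0]))

    for quality in ('hd', 'sd', 'other'):
        if files[quality]:
            # First entry of the best non-empty bucket wins.
            video_codec, video_extension, video_quality = files[quality][0]
            break

    print(video_codec, video_extension, video_quality)  # -> vp8 flv hd

The upshot: an SD-only h264 file no longer shadows an HD file in a later codec, which is what the old first-codec-wins loop did.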
 
+class ArteTvIE(InfoExtractor):
+    """arte.tv information extractor."""
+
+    _VALID_URL = r'(?:http://)?videos\.arte\.tv/(?:fr|de)/videos/.*'
+    _LIVE_URL = r'index-[0-9]+\.html$'
+
+    IE_NAME = u'arte.tv'
+
+    def __init__(self, downloader=None):
+        InfoExtractor.__init__(self, downloader)
+
+    def report_download_webpage(self, video_id):
+        """Report webpage download."""
+        self._downloader.to_screen(u'[arte.tv] %s: Downloading webpage' % video_id)
+
+    def report_extraction(self, video_id):
+        """Report information extraction."""
+        self._downloader.to_screen(u'[arte.tv] %s: Extracting information' % video_id)
+
+    def fetch_webpage(self, url):
+        request = compat_urllib_request.Request(url)
+        try:
+            self.report_download_webpage(url)
+            webpage = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
+            return
+        except ValueError as err:
+            self._downloader.trouble(u'ERROR: Invalid URL: %s' % url)
+            return
+        return webpage
+
+    def grep_webpage(self, url, regex, regexFlags, matchTuples):
+        page = self.fetch_webpage(url)
+        mobj = re.search(regex, page, regexFlags)
+        info = {}
+
+        if mobj is None:
+            self._downloader.trouble(u'ERROR: Invalid URL: %s' % url)
+            return
+
+        for (i, key, err) in matchTuples:
+            if mobj.group(i) is None:
+                self._downloader.trouble(err)
+                return
+            else:
+                info[key] = mobj.group(i)
+
+        return info
+
+    def extractLiveStream(self, url):
+        video_lang = url.split('/')[-4]
+        info = self.grep_webpage(
+            url,
+            r'src="(.*?/videothek_js.*?\.js)',
+            0,
+            [
+                (1, 'url', u'ERROR: Invalid URL: %s' % url)
+            ]
+        )
+        http_host = url.split('/')[2]
+        next_url = 'http://%s%s' % (http_host, compat_urllib_parse.unquote(info.get('url')))
+        info = self.grep_webpage(
+            next_url,
+            r'(s_artestras_scst_geoFRDE_' + video_lang + '.*?)\'.*?' +
+                '(http://.*?\.swf).*?' +
+                '(rtmp://.*?)\'',
+            re.DOTALL,
+            [
+                (1, 'path', u'ERROR: could not extract video path: %s' % url),
+                (2, 'player', u'ERROR: could not extract video player: %s' % url),
+                (3, 'url', u'ERROR: could not extract video url: %s' % url)
+            ]
+        )
+        video_url = u'%s/%s' % (info.get('url'), info.get('path'))
+
+    def extractPlus7Stream(self, url):
+        video_lang = url.split('/')[-3]
+        info = self.grep_webpage(
+            url,
+            r'param name="movie".*?videorefFileUrl=(http[^\'"&]*)',
+            0,
+            [
+                (1, 'url', u'ERROR: Invalid URL: %s' % url)
+            ]
+        )
+        next_url = compat_urllib_parse.unquote(info.get('url'))
+        info = self.grep_webpage(
+            next_url,
+            r'<video lang="%s" ref="(http[^\'"&]*)' % video_lang,
+            0,
+            [
+                (1, 'url', u'ERROR: Could not find <video> tag: %s' % url)
+            ]
+        )
+        next_url = compat_urllib_parse.unquote(info.get('url'))
+
+        info = self.grep_webpage(
+            next_url,
+            r'<video id="(.*?)".*?>.*?' +
+                '<name>(.*?)</name>.*?' +
+                '<dateVideo>(.*?)</dateVideo>.*?' +
+                '<url quality="hd">(.*?)</url>',
+            re.DOTALL,
+            [
+                (1, 'id', u'ERROR: could not extract video id: %s' % url),
+                (2, 'title', u'ERROR: could not extract video title: %s' % url),
+                (3, 'date', u'ERROR: could not extract video date: %s' % url),
+                (4, 'url', u'ERROR: could not extract video url: %s' % url)
+            ]
+        )
+
+        return {
+            'id': info.get('id'),
+            'url': compat_urllib_parse.unquote(info.get('url')),
+            'uploader': u'arte.tv',
+            'upload_date': info.get('date'),
+            'title': info.get('title').decode('utf-8'),
+            'ext': u'mp4',
+            'format': u'NA',
+            'player_url': None,
+        }
+
+    def _real_extract(self, url):
+        video_id = url.split('/')[-1]
+        self.report_extraction(video_id)
+
+        if re.search(self._LIVE_URL, video_id) is not None:
+            self.extractLiveStream(url)
+            return
+        else:
+            info = self.extractPlus7Stream(url)
+
+        return [info]
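grep_webpage above factors the fetch-then-regex step into one call: download a page, apply a single regex, and map numbered groups to dict keys through matchTuples, reporting a per-group error message on failure. A hedged usage sketch (URL and pattern are illustrative, not taken from the real arte.tv pages):

    # Illustrative only; the URL and regex are not from the actual extractor.
    info = self.grep_webpage(
        'http://example.com/player',
        r'<video id="(.*?)".*?<url>(.*?)</url>',
        re.DOTALL,
        [
            (1, 'id', u'ERROR: could not extract video id'),
            (2, 'url', u'ERROR: could not extract video url'),
        ]
    )
    # On success: info == {'id': '...', 'url': '...'}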
 
 class GenericIE(InfoExtractor):
     """Generic last-resort information extractor."""
@@ -1155,11 +1238,11 @@ class GenericIE(InfoExtractor):
     def _test_redirect(self, url):
         """Check if it is a redirect, like url shorteners, in case restart chain."""
-        class HeadRequest(urllib2.Request):
+        class HeadRequest(compat_urllib_request.Request):
             def get_method(self):
                 return "HEAD"
 
-        class HEADRedirectHandler(urllib2.HTTPRedirectHandler):
+        class HEADRedirectHandler(compat_urllib_request.HTTPRedirectHandler):
             """
             Subclass the HTTPRedirectHandler to make it use our
             HeadRequest also on the redirected URL
@@ -1174,9 +1257,9 @@ class GenericIE(InfoExtractor):
                                        origin_req_host=req.get_origin_req_host(),
                                        unverifiable=True)
                 else:
-                    raise urllib2.HTTPError(req.get_full_url(), code, msg, headers, fp)
+                    raise compat_urllib_error.HTTPError(req.get_full_url(), code, msg, headers, fp)
 
-        class HTTPMethodFallback(urllib2.BaseHandler):
+        class HTTPMethodFallback(compat_urllib_request.BaseHandler):
             """
             Fallback to GET if HEAD is not allowed (405 HTTP error)
             """
@@ -1186,22 +1269,23 @@ class GenericIE(InfoExtractor):
                 newheaders = dict((k,v) for k,v in req.headers.items()
                                   if k.lower() not in ("content-length", "content-type"))
-                return self.parent.open(urllib2.Request(req.get_full_url(),
+                return self.parent.open(compat_urllib_request.Request(req.get_full_url(),
                                                  headers=newheaders,
                                                  origin_req_host=req.get_origin_req_host(),
                                                  unverifiable=True))
 
         # Build our opener
-        opener = urllib2.OpenerDirector()
-        for handler in [urllib2.HTTPHandler, urllib2.HTTPDefaultErrorHandler,
+        opener = compat_urllib_request.OpenerDirector()
+        for handler in [compat_urllib_request.HTTPHandler, compat_urllib_request.HTTPDefaultErrorHandler,
                         HTTPMethodFallback, HEADRedirectHandler,
-                        urllib2.HTTPErrorProcessor, urllib2.HTTPSHandler]:
+                        compat_urllib_error.HTTPErrorProcessor, compat_urllib_request.HTTPSHandler]:
             opener.add_handler(handler())
 
         response = opener.open(HeadRequest(url))
         new_url = response.geturl()
 
-        if url == new_url: return False
+        if url == new_url:
+            return False
 
         self.report_following_redirect(new_url)
         self._downloader.download([new_url])
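_test_redirect resolves URL shorteners by issuing a HEAD request, following redirects with HEAD as well, and falling back to GET when a server answers 405. Stripped of the handler plumbing, the core idea looks roughly like this (a sketch on Python 3 stdlib names rather than the compat aliases, since those aliases are local to this module):

    # Minimal sketch; youtube-dl's version adds HEAD-preserving redirect
    # handling and a GET fallback on 405 responses.
    import urllib.request

    class HeadRequest(urllib.request.Request):
        def get_method(self):
            return 'HEAD'

    def resolve_redirect(url):
        response = urllib.request.urlopen(HeadRequest(url))
        new_url = response.geturl()
        return None if new_url == url else new_url  # None means: not a redirect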
@@ -1211,14 +1295,14 @@ class GenericIE(InfoExtractor):
         if self._test_redirect(url): return
 
         video_id = url.split('/')[-1]
-        request = urllib2.Request(url)
+        request = compat_urllib_request.Request(url)
         try:
             self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
             return
-        except ValueError, err:
+        except ValueError as err:
             # since this is the last-resort InfoExtractor, if
             # this error is thrown, it'll be thrown here
             self._downloader.trouble(u'ERROR: Invalid URL: %s' % url)
@@ -1240,7 +1324,7 @@ class GenericIE(InfoExtractor):
             self._downloader.trouble(u'ERROR: Invalid URL: %s' % url)
             return
 
-        video_url = urllib.unquote(mobj.group(1))
+        video_url = compat_urllib_parse.unquote(mobj.group(1))
         video_id = os.path.basename(video_url)
 
         # here's a fun little line of code for you:
@@ -1257,24 +1341,22 @@ class GenericIE(InfoExtractor):
         if mobj is None:
             self._downloader.trouble(u'ERROR: unable to extract title')
             return
-        video_title = mobj.group(1).decode('utf-8')
+        video_title = mobj.group(1)
 
         # video uploader is domain name
         mobj = re.match(r'(?:https?://)?([^/]*)/.*', url)
         if mobj is None:
             self._downloader.trouble(u'ERROR: unable to extract title')
             return
-        video_uploader = mobj.group(1).decode('utf-8')
+        video_uploader = mobj.group(1)
 
         return [{
-            'id': video_id.decode('utf-8'),
-            'url': video_url.decode('utf-8'),
+            'id': video_id,
+            'url': video_url,
             'uploader': video_uploader,
-            'upload_date': u'NA',
+            'upload_date': None,
             'title': video_title,
-            'ext': video_extension.decode('utf-8'),
-            'format': u'NA',
-            'player_url': None,
+            'ext': video_extension,
         }]
@@ -1310,7 +1392,7 @@ class YoutubeSearchIE(InfoExtractor):
                 return
         else:
             try:
-                n = long(prefix)
+                n = int(prefix)
                 if n <= 0:
                     self._downloader.trouble(u'ERROR: invalid download number %s for query "%s"' % (n, query))
                     return
@@ -1332,12 +1414,12 @@ class YoutubeSearchIE(InfoExtractor):
         while (50 * pagenum) < limit:
             self.report_download_page(query, pagenum+1)
-            result_url = self._API_URL % (urllib.quote_plus(query), (50*pagenum)+1)
-            request = urllib2.Request(result_url)
+            result_url = self._API_URL % (compat_urllib_parse.quote_plus(query), (50*pagenum)+1)
+            request = compat_urllib_request.Request(result_url)
             try:
-                data = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download API page: %s' % str(err))
+                data = compat_urllib_request.urlopen(request).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download API page: %s' % compat_str(err))
                 return
             api_response = json.loads(data)['data']
@@ -1388,7 +1470,7 @@ class GoogleSearchIE(InfoExtractor):
                 return
         else:
             try:
-                n = long(prefix)
+                n = int(prefix)
                 if n <= 0:
                     self._downloader.trouble(u'ERROR: invalid download number %s for query "%s"' % (n, query))
                     return
@@ -1409,12 +1491,12 @@ class GoogleSearchIE(InfoExtractor):
         while True:
             self.report_download_page(query, pagenum)
-            result_url = self._TEMPLATE_URL % (urllib.quote_plus(query), pagenum*10)
-            request = urllib2.Request(result_url)
+            result_url = self._TEMPLATE_URL % (compat_urllib_parse.quote_plus(query), pagenum*10)
+            request = compat_urllib_request.Request(result_url)
             try:
-                page = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
+                page = compat_urllib_request.urlopen(request).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
                 return
 
             # Extract video identifiers
@@ -1438,6 +1520,8 @@ class GoogleSearchIE(InfoExtractor):
 class YahooSearchIE(InfoExtractor):
     """Information Extractor for Yahoo! Video search queries."""
 
+    _WORKING = False
+
     _VALID_URL = r'yvsearch(\d+|all)?:[\s\S]+'
     _TEMPLATE_URL = 'http://video.yahoo.com/search/?p=%s&o=%s'
     _VIDEO_INDICATOR = r'href="http://video\.yahoo\.com/watch/([0-9]+/[0-9]+)"'
@@ -1470,7 +1554,7 @@ class YahooSearchIE(InfoExtractor):
                 return
         else:
             try:
-                n = long(prefix)
+                n = int(prefix)
                 if n <= 0:
                     self._downloader.trouble(u'ERROR: invalid download number %s for query "%s"' % (n, query))
                     return
@@ -1492,12 +1576,12 @@ class YahooSearchIE(InfoExtractor):
         while True:
             self.report_download_page(query, pagenum)
-            result_url = self._TEMPLATE_URL % (urllib.quote_plus(query), pagenum)
-            request = urllib2.Request(result_url)
+            result_url = self._TEMPLATE_URL % (compat_urllib_parse.quote_plus(query), pagenum)
+            request = compat_urllib_request.Request(result_url)
             try:
-                page = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
+                page = compat_urllib_request.urlopen(request).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
                 return
 
             # Extract video identifiers
@@ -1523,10 +1607,10 @@ class YahooSearchIE(InfoExtractor):
 class YoutubePlaylistIE(InfoExtractor):
     """Information Extractor for YouTube playlists."""
 
-    _VALID_URL = r'(?:(?:https?://)?(?:\w+\.)?youtube\.com/(?:(?:course|view_play_list|my_playlists|artist|playlist)\?.*?(p|a|list)=|user/.*?/user/|p/|user/.*?#[pg]/c/)(?:PL|EC)?|PL|EC)([0-9A-Za-z-_]+)(?:/.*?/([0-9A-Za-z_-]+))?.*'
+    _VALID_URL = r'(?:(?:https?://)?(?:\w+\.)?youtube\.com/(?:(?:course|view_play_list|my_playlists|artist|playlist)\?.*?(p|a|list)=|user/.*?/user/|p/|user/.*?#[pg]/c/)(?:PL|EC)?|PL|EC)([0-9A-Za-z-_]{10,})(?:/.*?/([0-9A-Za-z_-]+))?.*'
     _TEMPLATE_URL = 'http://www.youtube.com/%s?%s=%s&page=%s&gl=US&hl=en'
     _VIDEO_INDICATOR_TEMPLATE = r'/watch\?v=(.+?)&amp;([^&"]+&amp;)*list=.*?%s'
-    _MORE_PAGES_INDICATOR = r'yt-uix-pager-next'
+    _MORE_PAGES_INDICATOR = u"Next \N{RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK}"
     IE_NAME = u'youtube:playlist'
 
     def __init__(self, downloader=None):
@@ -1563,11 +1647,11 @@ class YoutubePlaylistIE(InfoExtractor):
         while True:
             self.report_download_page(playlist_id, pagenum)
             url = self._TEMPLATE_URL % (playlist_access, playlist_prefix, playlist_id, pagenum)
-            request = urllib2.Request(url)
+            request = compat_urllib_request.Request(url)
             try:
-                page = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
+                page = compat_urllib_request.urlopen(request).read().decode('utf-8')
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
                 return
 
             # Extract video identifiers
@@ -1577,10 +1661,12 @@ class YoutubePlaylistIE(InfoExtractor):
                     ids_in_page.append(mobj.group(1))
             video_ids.extend(ids_in_page)
 
-            if re.search(self._MORE_PAGES_INDICATOR, page) is None:
+            if self._MORE_PAGES_INDICATOR not in page:
                 break
             pagenum = pagenum + 1
 
+        total = len(video_ids)
+
         playliststart = self._downloader.params.get('playliststart', 1) - 1
         playlistend = self._downloader.params.get('playlistend', -1)
         if playlistend == -1:
@@ -1588,6 +1674,11 @@ class YoutubePlaylistIE(InfoExtractor):
         else:
             video_ids = video_ids[playliststart:playlistend]
 
+        if len(video_ids) == total:
+            self._downloader.to_screen(u'[youtube] PL %s: Found %i videos' % (playlist_id, total))
+        else:
+            self._downloader.to_screen(u'[youtube] PL %s: Found %i videos, downloading %i' % (playlist_id, total, len(video_ids)))
+
         for id in video_ids:
             self._downloader.download(['http://www.youtube.com/watch?v=%s' % id])
         return
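Both the playlist and channel loops now detect the last page by looking for a literal "Next »" marker in the decoded page text instead of matching a markup class with a regex. The loop shape, with hypothetical fetch_page and extract_ids helpers standing in for the template-URL request and the indicator regex:

    # fetch_page and extract_ids are hypothetical stand-ins.
    MORE_PAGES_INDICATOR = u"Next \N{RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK}"

    video_ids = []
    pagenum = 1
    while True:
        page = fetch_page(pagenum)           # decoded HTML text
        video_ids.extend(extract_ids(page))
        if MORE_PAGES_INDICATOR not in page:
            break                            # no "Next »" link: last page
        pagenum += 1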
@@ -1598,7 +1689,7 @@ class YoutubeChannelIE(InfoExtractor):
 
     _VALID_URL = r"^(?:https?://)?(?:youtu\.be|(?:\w+\.)?youtube(?:-nocookie)?\.com)/channel/([0-9A-Za-z_-]+)(?:/.*)?$"
     _TEMPLATE_URL = 'http://www.youtube.com/channel/%s/videos?sort=da&flow=list&view=0&page=%s&gl=US&hl=en'
-    _MORE_PAGES_INDICATOR = r'yt-uix-button-content">Next'  # TODO
+    _MORE_PAGES_INDICATOR = u"Next \N{RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK}"
     IE_NAME = u'youtube:channel'
 
     def report_download_page(self, channel_id, pagenum):
@@ -1620,11 +1711,11 @@ class YoutubeChannelIE(InfoExtractor):
         while True:
             self.report_download_page(channel_id, pagenum)
             url = self._TEMPLATE_URL % (channel_id, pagenum)
-            request = urllib2.Request(url)
+            request = compat_urllib_request.Request(url)
             try:
-                page = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
+                page = compat_urllib_request.urlopen(request).read().decode('utf8')
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
                 return
 
             # Extract video identifiers
@@ -1634,10 +1725,12 @@ class YoutubeChannelIE(InfoExtractor):
                     ids_in_page.append(mobj.group(1))
             video_ids.extend(ids_in_page)
 
-            if re.search(self._MORE_PAGES_INDICATOR, page) is None:
+            if self._MORE_PAGES_INDICATOR not in page:
                 break
             pagenum = pagenum + 1
 
+        self._downloader.to_screen(u'[youtube] Channel %s: Found %i videos' % (channel_id, len(video_ids)))
+
         for id in video_ids:
             self._downloader.download(['http://www.youtube.com/watch?v=%s' % id])
         return
@@ -1682,12 +1775,12 @@ class YoutubeUserIE(InfoExtractor):
             start_index = pagenum * self._GDATA_PAGE_SIZE + 1
             self.report_download_page(username, start_index)
 
-            request = urllib2.Request(self._GDATA_URL % (username, self._GDATA_PAGE_SIZE, start_index))
+            request = compat_urllib_request.Request(self._GDATA_URL % (username, self._GDATA_PAGE_SIZE, start_index))
             try:
-                page = urllib2.urlopen(request).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
+                page = compat_urllib_request.urlopen(request).read().decode('utf-8')
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
                 return
 
             # Extract video identifiers
@@ -1752,14 +1845,14 @@ class BlipTVUserIE(InfoExtractor):
 
         page_base = 'http://m.blip.tv/pr/show_get_full_episode_list?users_id=%s&lite=0&esi=1'
 
-        request = urllib2.Request(url)
+        request = compat_urllib_request.Request(url)
 
         try:
-            page = urllib2.urlopen(request).read().decode('utf-8')
+            page = compat_urllib_request.urlopen(request).read().decode('utf-8')
             mobj = re.search(r'data-users-id="([^"]+)"', page)
             page_base = page_base % mobj.group(1)
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
             return
 
@@ -1774,11 +1867,11 @@ class BlipTVUserIE(InfoExtractor):
         while True:
             self.report_download_page(username, pagenum)
 
-            request = urllib2.Request( page_base + "&page=" + str(pagenum) )
+            request = compat_urllib_request.Request( page_base + "&page=" + str(pagenum) )
 
             try:
-                page = urllib2.urlopen(request).read().decode('utf-8')
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
+                page = compat_urllib_request.urlopen(request).read().decode('utf-8')
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
                 self._downloader.trouble(u'ERROR: unable to download webpage: %s' % str(err))
                 return
@@ -1822,10 +1915,6 @@ class DepositFilesIE(InfoExtractor):
     """Information extractor for depositfiles.com"""
 
     _VALID_URL = r'(?:http://)?(?:\w+\.)?depositfiles\.com/(?:../(?#locale))?files/(.+)'
-    IE_NAME = u'DepositFiles'
-
-    def __init__(self, downloader=None):
-        InfoExtractor.__init__(self, downloader)
 
     def report_download_webpage(self, file_id):
         """Report webpage download."""
@@ -1842,12 +1931,12 @@ class DepositFilesIE(InfoExtractor):
 
         # Retrieve file webpage with 'Free download' button pressed
         free_download_indication = { 'gateway_result' : '1' }
-        request = urllib2.Request(url, urllib.urlencode(free_download_indication))
+        request = compat_urllib_request.Request(url, compat_urllib_parse.urlencode(free_download_indication))
         try:
             self.report_download_webpage(file_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve file webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve file webpage: %s' % compat_str(err))
             return
 
         # Search for the real file URL
@@ -1875,18 +1964,17 @@ class DepositFilesIE(InfoExtractor):
         return [{
             'id': file_id.decode('utf-8'),
             'url': file_url.decode('utf-8'),
-            'uploader': u'NA',
-            'upload_date': u'NA',
+            'uploader': None,
+            'upload_date': None,
             'title': file_title,
             'ext': file_extension.decode('utf-8'),
-            'format': u'NA',
-            'player_url': None,
         }]
 
 class FacebookIE(InfoExtractor):
     """Information Extractor for Facebook"""
 
+    _WORKING = False
+
     _VALID_URL = r'^(?:https?://)?(?:\w+\.)?facebook\.com/(?:video/video|photo)\.php\?(?:.*?)v=(?P<ID>\d+)(?:.*)'
     _LOGIN_URL = 'https://login.facebook.com/login.php?m&next=http%3A%2F%2Fm.facebook.com%2Fhome.php&'
     _NETRC_MACHINE = 'facebook'
@@ -1929,7 +2017,7 @@ class FacebookIE(InfoExtractor):
         for piece in data.keys():
             mobj = re.search(data[piece], video_webpage)
             if mobj is not None:
-                video_info[piece] = urllib.unquote_plus(mobj.group(1).decode("unicode_escape"))
+                video_info[piece] = compat_urllib_parse.unquote_plus(mobj.group(1).decode("unicode_escape"))
 
         # Video urls
         video_urls = {}
@@ -1938,7 +2026,7 @@ class FacebookIE(InfoExtractor):
             if mobj is not None:
                 # URL is in a Javascript segment inside an escaped Unicode format within
                 # the generally utf-8 page
-                video_urls[fmt] = urllib.unquote_plus(mobj.group(1).decode("unicode_escape"))
+                video_urls[fmt] = compat_urllib_parse.unquote_plus(mobj.group(1).decode("unicode_escape"))
         video_info['video_urls'] = video_urls
 
         return video_info
@@ -1963,8 +2051,8 @@ class FacebookIE(InfoExtractor):
                     password = info[2]
                 else:
                     raise netrc.NetrcParseError('No authenticators for %s' % self._NETRC_MACHINE)
-            except (IOError, netrc.NetrcParseError), err:
-                self._downloader.to_stderr(u'WARNING: parsing .netrc: %s' % str(err))
+            except (IOError, netrc.NetrcParseError) as err:
+                self._downloader.to_stderr(u'WARNING: parsing .netrc: %s' % compat_str(err))
                 return
 
         if useremail is None:
@@ -1976,15 +2064,15 @@ class FacebookIE(InfoExtractor):
             'pass': password,
             'login': 'Log+In'
             }
-        request = urllib2.Request(self._LOGIN_URL, urllib.urlencode(login_form))
+        request = compat_urllib_request.Request(self._LOGIN_URL, compat_urllib_parse.urlencode(login_form))
         try:
             self.report_login()
-            login_results = urllib2.urlopen(request).read()
+            login_results = compat_urllib_request.urlopen(request).read()
             if re.search(r'<form(.*)name="login"(.*)</form>', login_results) is not None:
                 self._downloader.to_stderr(u'WARNING: unable to log in: bad username/password, or exceded login rate limit (~3/min). Check credentials or wait.')
                 return
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.to_stderr(u'WARNING: unable to log in: %s' % str(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.to_stderr(u'WARNING: unable to log in: %s' % compat_str(err))
             return
 
     def _real_extract(self, url):
@@ -1996,12 +2084,12 @@ class FacebookIE(InfoExtractor):
 
         # Get video webpage
         self.report_video_webpage_download(video_id)
-        request = urllib2.Request('https://www.facebook.com/video/video.php?v=%s' % video_id)
+        request = compat_urllib_request.Request('https://www.facebook.com/video/video.php?v=%s' % video_id)
         try:
-            page = urllib2.urlopen(request)
+            page = compat_urllib_request.urlopen(request)
             video_webpage = page.read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % compat_str(err))
             return
 
         # Start extracting information
@@ -2031,7 +2119,7 @@ class FacebookIE(InfoExtractor):
         video_thumbnail = video_info['thumbnail']
 
         # upload date
-        upload_date = u'NA'
+        upload_date = None
         if 'upload_date' in video_info:
             upload_time = video_info['upload_date']
             timetuple = email.utils.parsedate_tz(upload_time)
@@ -2045,7 +2133,7 @@ class FacebookIE(InfoExtractor):
         video_description = video_info.get('description', 'No description available.')
 
         url_map = video_info['video_urls']
-        if len(url_map.keys()) > 0:
+        if url_map:
             # Decide which formats to download
             req_format = self._downloader.params.get('format', None)
             format_limit = self._downloader.params.get('format_limit', None)
@@ -2086,7 +2174,6 @@ class FacebookIE(InfoExtractor):
                 'format': (format_param is None and u'NA' or format_param.decode('utf-8')),
                 'thumbnail': video_thumbnail.decode('utf-8'),
                 'description': video_description.decode('utf-8'),
-                'player_url': None,
             })
         return results
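The .netrc branch above uses the stdlib netrc module; _NETRC_MACHINE is the machine key looked up in the user's ~/.netrc when --netrc is passed. A small sketch of that lookup on its own:

    import netrc

    # 'facebook' corresponds to _NETRC_MACHINE above.
    info = netrc.netrc().authenticators('facebook')
    if info is not None:
        useremail, _account, password = info
    else:
        raise netrc.NetrcParseError('No authenticators for facebook')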
@@ -2116,11 +2203,11 @@ class BlipTVIE(InfoExtractor):
         else:
             cchar = '?'
         json_url = url + cchar + 'skin=json&version=2&no_wrap=1'
-        request = urllib2.Request(json_url.encode('utf-8'))
+        request = compat_urllib_request.Request(json_url)
         self.report_extraction(mobj.group(1))
         info = None
         try:
-            urlh = urllib2.urlopen(request)
+            urlh = compat_urllib_request.urlopen(request)
             if urlh.headers.get('Content-Type', '').startswith('video/'): # Direct download
                 basename = url.split('/')[-1]
                 title,ext = os.path.splitext(basename)
@@ -2130,18 +2217,21 @@ class BlipTVIE(InfoExtractor):
                 info = {
                     'id': title,
                     'url': url,
+                    'uploader': None,
+                    'upload_date': None,
                     'title': title,
                     'ext': ext,
                     'urlhandle': urlh
                 }
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video info webpage: %s' % str(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video info webpage: %s' % compat_str(err))
             return
         if info is None: # Regular URL
             try:
-                json_code = urlh.read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to read video info webpage: %s' % str(err))
+                json_code_bytes = urlh.read()
+                json_code = json_code_bytes.decode('utf-8')
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to read video info webpage: %s' % compat_str(err))
                 return
 
             try:
@@ -2170,7 +2260,7 @@ class BlipTVIE(InfoExtractor):
                     'description': data['description'],
                     'player_url': data['embedUrl']
                 }
-            except (ValueError,KeyError), err:
+            except (ValueError,KeyError) as err:
                 self._downloader.trouble(u'ERROR: unable to parse video information: %s' % repr(err))
                 return
@@ -2187,10 +2277,6 @@ class MyVideoIE(InfoExtractor):
     def __init__(self, downloader=None):
         InfoExtractor.__init__(self, downloader)
 
-    def report_download_webpage(self, video_id):
-        """Report webpage download."""
-        self._downloader.to_screen(u'[myvideo] %s: Downloading webpage' % video_id)
-
     def report_extraction(self, video_id):
         """Report information extraction."""
         self._downloader.to_screen(u'[myvideo] %s: Extracting information' % video_id)
@@ -2204,13 +2290,8 @@ class MyVideoIE(InfoExtractor):
         video_id = mobj.group(1)
 
         # Get video webpage
-        request = urllib2.Request('http://www.myvideo.de/watch/%s' % video_id)
-        try:
-            self.report_download_webpage(video_id)
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
-            return
+        webpage_url = 'http://www.myvideo.de/watch/%s' % video_id
+        webpage = self._download_webpage(webpage_url, video_id)
 
         self.report_extraction(video_id)
         mobj = re.search(r'<link rel=\'image_src\' href=\'(http://is[0-9].myvideo\.de/de/movie[0-9]+/[a-f0-9]+)/thumbs/[^.]+\.jpg\' />',
@@ -2230,20 +2311,53 @@ class MyVideoIE(InfoExtractor):
         return [{
             'id': video_id,
             'url': video_url,
-            'uploader': u'NA',
-            'upload_date': u'NA',
+            'uploader': None,
+            'upload_date': None,
             'title': video_title,
             'ext': u'flv',
-            'format': u'NA',
-            'player_url': None,
         }]
class ComedyCentralIE(InfoExtractor): class ComedyCentralIE(InfoExtractor):
"""Information extractor for The Daily Show and Colbert Report """ """Information extractor for The Daily Show and Colbert Report """
_VALID_URL = r'^(:(?P<shortname>tds|thedailyshow|cr|colbert|colbertnation|colbertreport))|(https?://)?(www\.)?(?P<showname>thedailyshow|colbertnation)\.com/full-episodes/(?P<episode>.*)$' # urls can be abbreviations like :thedailyshow or :colbert
# urls for episodes like:
# or urls for clips like: http://www.thedailyshow.com/watch/mon-december-10-2012/any-given-gun-day
# or: http://www.colbertnation.com/the-colbert-report-videos/421667/november-29-2012/moon-shattering-news
# or: http://www.colbertnation.com/the-colbert-report-collections/422008/festival-of-lights/79524
_VALID_URL = r"""^(:(?P<shortname>tds|thedailyshow|cr|colbert|colbertnation|colbertreport)
|(https?://)?(www\.)?
(?P<showname>thedailyshow|colbertnation)\.com/
(full-episodes/(?P<episode>.*)|
(?P<clip>
(the-colbert-report-(videos|collections)/(?P<clipID>[0-9]+)/[^/]*/(?P<cntitle>.*?))
|(watch/(?P<date>[^/]*)/(?P<tdstitle>.*)))))
$"""
IE_NAME = u'comedycentral' IE_NAME = u'comedycentral'
_available_formats = ['3500', '2200', '1700', '1200', '750', '400']
_video_extensions = {
'3500': 'mp4',
'2200': 'mp4',
'1700': 'mp4',
'1200': 'mp4',
'750': 'mp4',
'400': 'mp4',
}
_video_dimensions = {
'3500': '1280x720',
'2200': '960x540',
'1700': '768x432',
'1200': '640x360',
'750': '512x288',
'400': '384x216',
}
def suitable(self, url):
"""Receives a URL and returns True if suitable for this IE."""
return re.match(self._VALID_URL, url, re.VERBOSE) is not None
def report_extraction(self, episode_id): def report_extraction(self, episode_id):
self._downloader.to_screen(u'[comedycentral] %s: Extracting information' % episode_id) self._downloader.to_screen(u'[comedycentral] %s: Extracting information' % episode_id)
@ -2256,8 +2370,15 @@ class ComedyCentralIE(InfoExtractor):
def report_player_url(self, episode_id): def report_player_url(self, episode_id):
self._downloader.to_screen(u'[comedycentral] %s: Determining player URL' % episode_id) self._downloader.to_screen(u'[comedycentral] %s: Determining player URL' % episode_id)
def _print_formats(self, formats):
print('Available formats:')
for x in formats:
print('%s\t:\t%s\t[%s]' %(x, self._video_extensions.get(x, 'mp4'), self._video_dimensions.get(x, '???')))
def _real_extract(self, url): def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url) mobj = re.match(self._VALID_URL, url, re.VERBOSE)
if mobj is None: if mobj is None:
self._downloader.trouble(u'ERROR: invalid URL: %s' % url) self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
return return
@@ -2267,26 +2388,33 @@ class ComedyCentralIE(InfoExtractor):
            url = u'http://www.thedailyshow.com/full-episodes/'
        else:
            url = u'http://www.colbertnation.com/full-episodes/'
-        mobj = re.match(self._VALID_URL, url)
+        mobj = re.match(self._VALID_URL, url, re.VERBOSE)
        assert mobj is not None

+        if mobj.group('clip'):
+            if mobj.group('showname') == 'thedailyshow':
+                epTitle = mobj.group('tdstitle')
+            else:
+                epTitle = mobj.group('cntitle')
+            dlNewest = False
+        else:
            dlNewest = not mobj.group('episode')
            if dlNewest:
                epTitle = mobj.group('showname')
            else:
                epTitle = mobj.group('episode')

-        req = urllib2.Request(url)
+        req = compat_urllib_request.Request(url)
        self.report_extraction(epTitle)
        try:
-            htmlHandle = urllib2.urlopen(req)
+            htmlHandle = compat_urllib_request.urlopen(req)
            html = htmlHandle.read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download webpage: %s' % unicode(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
            return
        if dlNewest:
            url = htmlHandle.geturl()
-            mobj = re.match(self._VALID_URL, url)
+            mobj = re.match(self._VALID_URL, url, re.VERBOSE)
            if mobj is None:
                self._downloader.trouble(u'ERROR: Invalid redirected URL: ' + url)
                return
@@ -2295,27 +2423,36 @@ class ComedyCentralIE(InfoExtractor):
                return
            epTitle = mobj.group('episode')

-        mMovieParams = re.findall('(?:<param name="movie" value="|var url = ")(http://media.mtvnservices.com/([^"]*episode.*?:.*?))"', html)
+        mMovieParams = re.findall('(?:<param name="movie" value="|var url = ")(http://media.mtvnservices.com/([^"]*(?:episode|video).*?:.*?))"', html)

        if len(mMovieParams) == 0:
+            # The Colbert Report embeds the information in a without
+            # a URL prefix; so extract the alternate reference
+            # and then add the URL prefix manually.
+            altMovieParams = re.findall('data-mgid="([^"]*(?:episode|video).*?:.*?)"', html)
+            if len(altMovieParams) == 0:
                self._downloader.trouble(u'ERROR: unable to find Flash URL in webpage ' + url)
                return
+            else:
+                mMovieParams = [("http://media.mtvnservices.com/" + altMovieParams[0], altMovieParams[0])]

        playerUrl_raw = mMovieParams[0][0]
        self.report_player_url(epTitle)
        try:
-            urlHandle = urllib2.urlopen(playerUrl_raw)
+            urlHandle = compat_urllib_request.urlopen(playerUrl_raw)
            playerUrl = urlHandle.geturl()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to find out player URL: ' + unicode(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to find out player URL: ' + compat_str(err))
            return

        uri = mMovieParams[0][1]
-        indexUrl = 'http://shadow.comedycentral.com/feeds/video_player/mrss/?' + urllib.urlencode({'uri': uri})
+        indexUrl = 'http://shadow.comedycentral.com/feeds/video_player/mrss/?' + compat_urllib_parse.urlencode({'uri': uri})
        self.report_index_download(epTitle)
        try:
-            indexXml = urllib2.urlopen(indexUrl).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download episode index: ' + unicode(err))
+            indexXml = compat_urllib_request.urlopen(indexUrl).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download episode index: ' + compat_str(err))
            return

        results = []
@@ -2330,13 +2467,13 @@ class ComedyCentralIE(InfoExtractor):
            officialDate = itemEl.findall('./pubDate')[0].text

            configUrl = ('http://www.comedycentral.com/global/feeds/entertainment/media/mediaGenEntertainment.jhtml?' +
-                        urllib.urlencode({'uri': mediaId}))
-            configReq = urllib2.Request(configUrl)
+                        compat_urllib_parse.urlencode({'uri': mediaId}))
+            configReq = compat_urllib_request.Request(configUrl)
            self.report_config_download(epTitle)
            try:
-                configXml = urllib2.urlopen(configReq).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % unicode(err))
+                configXml = compat_urllib_request.urlopen(configReq).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))
                return

            cdoc = xml.etree.ElementTree.fromstring(configXml)
@@ -2349,9 +2486,30 @@ class ComedyCentralIE(InfoExtractor):
                self._downloader.trouble(u'\nERROR: unable to download ' + mediaId + ': No videos found')
                continue

+            if self._downloader.params.get('listformats', None):
+                self._print_formats([i[0] for i in turls])
+                return

            # For now, just pick the highest bitrate
            format,video_url = turls[-1]

+            # Get the format arg from the arg stream
+            req_format = self._downloader.params.get('format', None)
+
+            # Select format if we can find one
+            for f,v in turls:
+                if f == req_format:
+                    format, video_url = f, v
+                    break
+
+            # Patch to download from alternative CDN, which does not
+            # break on current RTMPDump builds
+            broken_cdn = "rtmpe://viacomccstrmfs.fplive.net/viacomccstrm/gsp.comedystor/"
+            better_cdn = "rtmpe://cp10740.edgefcs.net/ondemand/mtvnorigin/gsp.comedystor/"
+
+            if video_url.startswith(broken_cdn):
+                video_url = video_url.replace(broken_cdn, better_cdn)

            effTitle = showId + u'-' + epTitle
            info = {
                'id': shortMediaId,
@@ -2363,7 +2521,7 @@ class ComedyCentralIE(InfoExtractor):
                'format': format,
                'thumbnail': None,
                'description': officialTitle,
-                'player_url': playerUrl
+                'player_url': None #playerUrl
            }

            results.append(info)
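The format-selection logic added above reduces to scanning the (format, url) tuples for the requested format and defaulting to the highest bitrate. A standalone sketch with illustrative names:

def pick_format(turls, req_format=None):
    # turls: list of (format, url) tuples in ascending bitrate order,
    # mirroring the ComedyCentral hunk above
    format, video_url = turls[-1]  # default: highest bitrate
    for f, v in turls:
        if f == req_format:
            format, video_url = f, v
            break
    return format, video_url

print(pick_format([('750', 'rtmpe://low'), ('3500', 'rtmpe://high')], req_format='750'))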
@@ -2393,12 +2551,12 @@ class EscapistIE(InfoExtractor):
        self.report_extraction(showName)
        try:
-            webPage = urllib2.urlopen(url)
+            webPage = compat_urllib_request.urlopen(url)
            webPageBytes = webPage.read()
            m = re.match(r'text/html; charset="?([^"]+)"?', webPage.headers['Content-Type'])
            webPage = webPageBytes.decode(m.group(1) if m else 'utf-8')
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download webpage: ' + unicode(err))
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download webpage: ' + compat_str(err))
            return

        descMatch = re.search('<meta name="description" content="([^"]*)"', webPage)
@@ -2408,13 +2566,15 @@ class EscapistIE(InfoExtractor):
        playerUrlMatch = re.search('<meta property="og:video" content="([^"]*)"', webPage)
        playerUrl = unescapeHTML(playerUrlMatch.group(1))
        configUrlMatch = re.search('config=(.*)$', playerUrl)
-        configUrl = urllib2.unquote(configUrlMatch.group(1))
+        configUrl = compat_urllib_parse.unquote(configUrlMatch.group(1))

        self.report_config_download(showName)
        try:
-            configJSON = urllib2.urlopen(configUrl).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download configuration: ' + unicode(err))
+            configJSON = compat_urllib_request.urlopen(configUrl)
+            m = re.match(r'text/html; charset="?([^"]+)"?', configJSON.headers['Content-Type'])
+            configJSON = configJSON.read().decode(m.group(1) if m else 'utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download configuration: ' + compat_str(err))
            return

        # Technically, it's JavaScript, not JSON
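Both Escapist hunks decode the HTTP response using the charset advertised in the Content-Type header, falling back to UTF-8. The same step in isolation (hypothetical helper name):

import re

def decode_response(handle):
    # handle is the object returned by urlopen()
    m = re.match(r'text/html; charset="?([^"]+)"?', handle.headers['Content-Type'])
    return handle.read().decode(m.group(1) if m else 'utf-8')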
@@ -2422,8 +2582,8 @@ class EscapistIE(InfoExtractor):
        try:
            config = json.loads(configJSON)
-        except (ValueError,), err:
-            self._downloader.trouble(u'ERROR: Invalid JSON in configuration file: ' + unicode(err))
+        except (ValueError,) as err:
+            self._downloader.trouble(u'ERROR: Invalid JSON in configuration file: ' + compat_str(err))
            return

        playlist = config['playlist']
@@ -2436,7 +2596,6 @@ class EscapistIE(InfoExtractor):
            'upload_date': None,
            'title': showName,
            'ext': 'flv',
-            'format': 'flv',
            'thumbnail': imgUrl,
            'description': description,
            'player_url': playerUrl,
@@ -2448,12 +2607,13 @@ class EscapistIE(InfoExtractor):

class CollegeHumorIE(InfoExtractor):
    """Information extractor for collegehumor.com"""

+    _WORKING = False
    _VALID_URL = r'^(?:https?://)?(?:www\.)?collegehumor\.com/video/(?P<videoid>[0-9]+)/(?P<shorttitle>.*)$'
    IE_NAME = u'collegehumor'

-    def report_webpage(self, video_id):
+    def report_manifest(self, video_id):
        """Report information extraction."""
-        self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, video_id))
+        self._downloader.to_screen(u'[%s] %s: Downloading XML manifest' % (self.IE_NAME, video_id))

    def report_extraction(self, video_id):
        """Report information extraction."""
@@ -2466,31 +2626,18 @@ class CollegeHumorIE(InfoExtractor):
            return
        video_id = mobj.group('videoid')

-        self.report_webpage(video_id)
-        request = urllib2.Request(url)
-        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
-            return
-
-        m = re.search(r'id="video:(?P<internalvideoid>[0-9]+)"', webpage)
-        if m is None:
-            self._downloader.trouble(u'ERROR: Cannot extract internal video ID')
-            return
-        internal_video_id = m.group('internalvideoid')
-
        info = {
            'id': video_id,
-            'internal_id': internal_video_id,
+            'uploader': None,
+            'upload_date': None,
        }

        self.report_extraction(video_id)
-        xmlUrl = 'http://www.collegehumor.com/moogaloop/video:' + internal_video_id
+        xmlUrl = 'http://www.collegehumor.com/moogaloop/video/' + video_id
        try:
-            metaXml = urllib2.urlopen(xmlUrl).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video info XML: %s' % str(err))
+            metaXml = compat_urllib_request.urlopen(xmlUrl).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video info XML: %s' % compat_str(err))
            return

        mdoc = xml.etree.ElementTree.fromstring(metaXml)
@@ -2498,14 +2645,34 @@ class CollegeHumorIE(InfoExtractor):
            videoNode = mdoc.findall('./video')[0]
            info['description'] = videoNode.findall('./description')[0].text
            info['title'] = videoNode.findall('./caption')[0].text
-            info['url'] = videoNode.findall('./file')[0].text
            info['thumbnail'] = videoNode.findall('./thumbnail')[0].text
-            info['ext'] = info['url'].rpartition('.')[2]
-            info['format'] = info['ext']
+            manifest_url = videoNode.findall('./file')[0].text
        except IndexError:
            self._downloader.trouble(u'\nERROR: Invalid metadata XML file')
            return

+        manifest_url += '?hdcore=2.10.3'
+        self.report_manifest(video_id)
+        try:
+            manifestXml = compat_urllib_request.urlopen(manifest_url).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video info XML: %s' % compat_str(err))
+            return
+
+        adoc = xml.etree.ElementTree.fromstring(manifestXml)
+        try:
+            media_node = adoc.findall('./{http://ns.adobe.com/f4m/1.0}media')[0]
+            node_id = media_node.attrib['url']
+            video_id = adoc.findall('./{http://ns.adobe.com/f4m/1.0}id')[0].text
+        except IndexError as err:
+            self._downloader.trouble(u'\nERROR: Invalid manifest file')
+            return
+
+        url_pr = compat_urllib_parse_urlparse(manifest_url)
+        url = url_pr.scheme + '://' + url_pr.netloc + '/z' + video_id[:-2] + '/' + node_id + 'Seg1-Frag1'
+
+        info['url'] = url
+        info['ext'] = 'f4f'
+
        return [info]
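The new CollegeHumor code reads an Adobe f4m manifest and assembles the first fragment URL from its media and id nodes. A rough standalone version of that step (function name is illustrative):

import xml.etree.ElementTree as ET
from urllib.parse import urlparse  # compat_urllib_parse_urlparse in the diff

F4M_NS = '{http://ns.adobe.com/f4m/1.0}'

def first_fragment_url(manifest_xml, manifest_url):
    # Mirrors the CollegeHumor hunk above: pull the media node's url
    # attribute and the manifest id, then build the Seg1-Frag1 URL.
    adoc = ET.fromstring(manifest_xml)
    node_id = adoc.find(F4M_NS + 'media').attrib['url']
    video_id = adoc.find(F4M_NS + 'id').text
    pr = urlparse(manifest_url)
    return '%s://%s/z%s/%sSeg1-Frag1' % (pr.scheme, pr.netloc, video_id[:-2], node_id)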
@@ -2515,10 +2682,6 @@ class XVideosIE(InfoExtractor):
    _VALID_URL = r'^(?:https?://)?(?:www\.)?xvideos\.com/video([0-9]+)(?:.*)'
    IE_NAME = u'xvideos'

-    def report_webpage(self, video_id):
-        """Report information extraction."""
-        self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, video_id))
-
    def report_extraction(self, video_id):
        """Report information extraction."""
        self._downloader.to_screen(u'[%s] %s: Extracting information' % (self.IE_NAME, video_id))
@@ -2528,16 +2691,9 @@ class XVideosIE(InfoExtractor):
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return
-        video_id = mobj.group(1).decode('utf-8')
-
-        self.report_webpage(video_id)
-
-        request = urllib2.Request(r'http://www.xvideos.com/video' + video_id)
-        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
-            return
+        video_id = mobj.group(1)
+
+        webpage = self._download_webpage(url, video_id)

        self.report_extraction(video_id)
@@ -2547,7 +2703,7 @@ class XVideosIE(InfoExtractor):
        if mobj is None:
            self._downloader.trouble(u'ERROR: unable to extract video url')
            return
-        video_url = urllib2.unquote(mobj.group(1).decode('utf-8'))
+        video_url = compat_urllib_parse.unquote(mobj.group(1))

        # Extract title
@@ -2555,7 +2711,7 @@ class XVideosIE(InfoExtractor):
        if mobj is None:
            self._downloader.trouble(u'ERROR: unable to extract video title')
            return
-        video_title = mobj.group(1).decode('utf-8')
+        video_title = mobj.group(1)

        # Extract video thumbnail
@@ -2563,7 +2719,7 @@ class XVideosIE(InfoExtractor):
        if mobj is None:
            self._downloader.trouble(u'ERROR: unable to extract video thumbnail')
            return
-        video_thumbnail = mobj.group(0).decode('utf-8')
+        video_thumbnail = mobj.group(0)

        info = {
            'id': video_id,
@@ -2572,10 +2728,8 @@ class XVideosIE(InfoExtractor):
            'upload_date': None,
            'title': video_title,
            'ext': 'flv',
-            'format': 'flv',
            'thumbnail': video_thumbnail,
            'description': None,
-            'player_url': None,
        }

        return [info]
@@ -2596,13 +2750,13 @@ class SoundcloudIE(InfoExtractor):
    def __init__(self, downloader=None):
        InfoExtractor.__init__(self, downloader)

-    def report_webpage(self, video_id):
+    def report_resolve(self, video_id):
        """Report information extraction."""
-        self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, video_id))
+        self._downloader.to_screen(u'[%s] %s: Resolving id' % (self.IE_NAME, video_id))

    def report_extraction(self, video_id):
        """Report information extraction."""
-        self._downloader.to_screen(u'[%s] %s: Extracting information' % (self.IE_NAME, video_id))
+        self._downloader.to_screen(u'[%s] %s: Retrieving stream' % (self.IE_NAME, video_id))

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
@@ -2611,79 +2765,53 @@ class SoundcloudIE(InfoExtractor):
            return

        # extract uploader (which is in the url)
-        uploader = mobj.group(1).decode('utf-8')
+        uploader = mobj.group(1)
        # extract simple title (uploader + slug of song title)
-        slug_title = mobj.group(2).decode('utf-8')
+        slug_title = mobj.group(2)
        simple_title = uploader + u'-' + slug_title

-        self.report_webpage('%s/%s' % (uploader, slug_title))
-
-        request = urllib2.Request('http://soundcloud.com/%s/%s' % (uploader, slug_title))
-        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
-            return
-
-        self.report_extraction('%s/%s' % (uploader, slug_title))
-
-        # extract uid and stream token that soundcloud hands out for access
-        mobj = re.search('"uid":"([\w\d]+?)".*?stream_token=([\w\d]+)', webpage)
-        if mobj:
-            video_id = mobj.group(1)
-            stream_token = mobj.group(2)
-
-        # extract unsimplified title
-        mobj = re.search('"title":"(.*?)",', webpage)
-        if mobj:
-            title = mobj.group(1).decode('utf-8')
-        else:
-            title = simple_title
-
-        # construct media url (with uid/token)
-        mediaURL = "http://media.soundcloud.com/stream/%s?stream_token=%s"
-        mediaURL = mediaURL % (video_id, stream_token)
-
-        # description
-        description = u'No description available'
-        mobj = re.search('track-description-value"><p>(.*?)</p>', webpage)
-        if mobj:
-            description = mobj.group(1)
-
-        # upload date
-        upload_date = None
-        mobj = re.search("pretty-date'>on ([\w]+ [\d]+, [\d]+ \d+:\d+)</abbr></h2>", webpage)
-        if mobj:
-            try:
-                upload_date = datetime.datetime.strptime(mobj.group(1), '%B %d, %Y %H:%M').strftime('%Y%m%d')
-            except Exception, e:
-                self._downloader.to_stderr(str(e))
-
-        # for soundcloud, a request to a cross domain is required for cookies
-        request = urllib2.Request('http://media.soundcloud.com/crossdomain.xml', std_headers)
+        self.report_resolve('%s/%s' % (uploader, slug_title))
+
+        url = 'http://soundcloud.com/%s/%s' % (uploader, slug_title)
+        resolv_url = 'http://api.soundcloud.com/resolve.json?url=' + url + '&client_id=b45b1aa10f1ac2941910a7f0d10f8e28'
+        request = compat_urllib_request.Request(resolv_url)
+        try:
+            info_json_bytes = compat_urllib_request.urlopen(request).read()
+            info_json = info_json_bytes.decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % compat_str(err))
+            return
+
+        info = json.loads(info_json)
+        video_id = info['id']
+        self.report_extraction('%s/%s' % (uploader, slug_title))
+
+        streams_url = 'https://api.sndcdn.com/i1/tracks/' + str(video_id) + '/streams?client_id=b45b1aa10f1ac2941910a7f0d10f8e28'
+        request = compat_urllib_request.Request(streams_url)
+        try:
+            stream_json_bytes = compat_urllib_request.urlopen(request).read()
+            stream_json = stream_json_bytes.decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download stream definitions: %s' % compat_str(err))
+            return
+
+        streams = json.loads(stream_json)
+        mediaURL = streams['http_mp3_128_url']

        return [{
-            'id': video_id.decode('utf-8'),
+            'id': info['id'],
            'url': mediaURL,
-            'uploader': uploader.decode('utf-8'),
-            'upload_date': upload_date,
-            'title': title,
+            'uploader': info['user']['username'],
+            'upload_date': info['created_at'],
+            'title': info['title'],
            'ext': u'mp3',
-            'format': u'NA',
-            'player_url': None,
-            'description': description.decode('utf-8')
+            'description': info['description'],
        }]
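The rewritten SoundcloudIE stops scraping HTML and instead makes two API calls: resolve.json for track metadata, then the streams endpoint for an MP3 URL. A condensed sketch (CLIENT_ID is a placeholder; the diff hard-codes one):

import json
from urllib.request import urlopen

CLIENT_ID = 'YOUR_CLIENT_ID'  # placeholder key, not the one from the diff

def resolve_track(permalink_url):
    # Step 1: resolve the permalink to track metadata
    info = json.loads(urlopen('http://api.soundcloud.com/resolve.json?url=%s&client_id=%s'
                              % (permalink_url, CLIENT_ID)).read().decode('utf-8'))
    # Step 2: ask the streams endpoint for the 128kbps MP3 URL
    streams = json.loads(urlopen('https://api.sndcdn.com/i1/tracks/%s/streams?client_id=%s'
                                 % (info['id'], CLIENT_ID)).read().decode('utf-8'))
    return info['title'], streams['http_mp3_128_url']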
class InfoQIE(InfoExtractor):
    """Information extractor for infoq.com"""

    _VALID_URL = r'^(?:https?://)?(?:www\.)?infoq\.com/[^/]+/[^/]+$'
    IE_NAME = u'infoq'

-    def report_webpage(self, video_id):
-        """Report information extraction."""
-        self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, video_id))
-
    def report_extraction(self, video_id):
        """Report information extraction."""
@@ -2695,38 +2823,29 @@ class InfoQIE(InfoExtractor):
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return

-        self.report_webpage(url)
-
-        request = urllib2.Request(url)
-        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
-            return
+        webpage = self._download_webpage(url, video_id=url)
        self.report_extraction(url)

        # Extract video URL
        mobj = re.search(r"jsclassref='([^']*)'", webpage)
        if mobj is None:
            self._downloader.trouble(u'ERROR: unable to extract video url')
            return
-        video_url = 'rtmpe://video.infoq.com/cfx/st/' + urllib2.unquote(mobj.group(1).decode('base64'))
+        real_id = compat_urllib_parse.unquote(base64.b64decode(mobj.group(1).encode('ascii')).decode('utf-8'))
+        video_url = 'rtmpe://video.infoq.com/cfx/st/' + real_id

        # Extract title
        mobj = re.search(r'contentTitle = "(.*?)";', webpage)
        if mobj is None:
            self._downloader.trouble(u'ERROR: unable to extract video title')
            return
-        video_title = mobj.group(1).decode('utf-8')
+        video_title = mobj.group(1)

        # Extract description
        video_description = u'No description available.'
        mobj = re.search(r'<meta name="description" content="(.*)"(?:\s*/)?>', webpage)
        if mobj is not None:
-            video_description = mobj.group(1).decode('utf-8')
+            video_description = mobj.group(1)

        video_filename = video_url.split('/')[-1]
        video_id, extension = video_filename.split('.')
@@ -2737,17 +2856,17 @@ class InfoQIE(InfoExtractor):
            'uploader': None,
            'upload_date': None,
            'title': video_title,
-            'ext': extension,
-            'format': extension, # Extension is always(?) mp4, but seems to be flv
+            'ext': extension, # Extension is always(?) mp4, but seems to be flv
            'thumbnail': None,
            'description': video_description,
-            'player_url': None,
        }

        return [info]

class MixcloudIE(InfoExtractor):
    """Information extractor for www.mixcloud.com"""

+    _WORKING = False # New API, but it seems good http://www.mixcloud.com/developers/documentation/
+
    _VALID_URL = r'^(?:https?://)?(?:www\.)?mixcloud\.com/([\w\d-]+)/([\w\d-]+)'
    IE_NAME = u'mixcloud'
@@ -2779,23 +2898,23 @@ class MixcloudIE(InfoExtractor):
        """Returns 1st active url from list"""
        for url in url_list:
            try:
-                urllib2.urlopen(url)
+                compat_urllib_request.urlopen(url)
                return url
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
                url = None

        return None

    def _print_formats(self, formats):
-        print 'Available formats:'
+        print('Available formats:')
        for fmt in formats.keys():
            for b in formats[fmt]:
                try:
                    ext = formats[fmt][b][0]
-                    print '%s\t%s\t[%s]' % (fmt, b, ext.split('.')[-1])
+                    print('%s\t%s\t[%s]' % (fmt, b, ext.split('.')[-1]))
                except TypeError: # we have no bitrate info
                    ext = formats[fmt][0]
-                    print '%s\t%s\t[%s]' % (fmt, '??', ext.split('.')[-1])
+                    print('%s\t%s\t[%s]' % (fmt, '??', ext.split('.')[-1]))
                    break

    def _real_extract(self, url):
@@ -2810,12 +2929,12 @@ class MixcloudIE(InfoExtractor):
        # construct API request
        file_url = 'http://www.mixcloud.com/api/1/cloudcast/' + '/'.join(url.split('/')[-3:-1]) + '.json'
        # retrieve .json file with links to files
-        request = urllib2.Request(file_url)
+        request = compat_urllib_request.Request(file_url)
        try:
            self.report_download_json(file_url)
-            jsonData = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve file: %s' % str(err))
+            jsonData = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve file: %s' % compat_str(err))
            return

        # parse JSON
@@ -2838,7 +2957,7 @@ class MixcloudIE(InfoExtractor):
                if file_url is not None:
                    break # got it!
        else:
-            if req_format not in formats.keys():
+            if req_format not in formats:
                self._downloader.trouble(u'ERROR: format is not available')
                return
@@ -2850,7 +2969,7 @@ class MixcloudIE(InfoExtractor):
            'id': file_id.decode('utf-8'),
            'url': file_url.decode('utf-8'),
            'uploader': uploader.decode('utf-8'),
-            'upload_date': u'NA',
+            'upload_date': None,
            'title': json_data['name'],
            'ext': file_url.split('.')[-1].decode('utf-8'),
            'format': (format_param is None and u'NA' or format_param.decode('utf-8')),
@@ -2884,15 +3003,17 @@ class StanfordOpenClassroomIE(InfoExtractor):
            video = mobj.group('video')
            info = {
                'id': course + '_' + video,
+                'uploader': None,
+                'upload_date': None,
            }

            self.report_extraction(info['id'])
            baseUrl = 'http://openclassroom.stanford.edu/MainFolder/courses/' + course + '/videos/'
            xmlUrl = baseUrl + video + '.xml'
            try:
-                metaXml = urllib2.urlopen(xmlUrl).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download video info XML: %s' % unicode(err))
+                metaXml = compat_urllib_request.urlopen(xmlUrl).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download video info XML: %s' % compat_str(err))
                return
            mdoc = xml.etree.ElementTree.fromstring(metaXml)
            try:
@@ -2902,20 +3023,21 @@ class StanfordOpenClassroomIE(InfoExtractor):
                self._downloader.trouble(u'\nERROR: Invalid metadata XML file')
                return
            info['ext'] = info['url'].rpartition('.')[2]
-            info['format'] = info['ext']
            return [info]
        elif mobj.group('course'): # A course page
            course = mobj.group('course')
            info = {
                'id': course,
                'type': 'playlist',
+                'uploader': None,
+                'upload_date': None,
            }

            self.report_download_webpage(info['id'])
            try:
-                coursepage = urllib2.urlopen(url).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download course info page: ' + unicode(err))
+                coursepage = compat_urllib_request.urlopen(url).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download course info page: ' + compat_str(err))
                return

            m = re.search('<h1>([^<]+)</h1>', coursepage)
@@ -2945,14 +3067,16 @@ class StanfordOpenClassroomIE(InfoExtractor):
            info = {
                'id': 'Stanford OpenClassroom',
                'type': 'playlist',
+                'uploader': None,
+                'upload_date': None,
            }

            self.report_download_webpage(info['id'])
            rootURL = 'http://openclassroom.stanford.edu/MainFolder/HomePage.php'
            try:
-                rootpage = urllib2.urlopen(rootURL).read()
-            except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-                self._downloader.trouble(u'ERROR: unable to download course info page: ' + unicode(err))
+                rootpage = compat_urllib_request.urlopen(rootURL).read()
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self._downloader.trouble(u'ERROR: unable to download course info page: ' + compat_str(err))
                return

            info['title'] = info['id']
@@ -2977,10 +3101,6 @@ class MTVIE(InfoExtractor):
    _VALID_URL = r'^(?P<proto>https?://)?(?:www\.)?mtv\.com/videos/[^/]+/(?P<videoid>[0-9]+)/[^/]+$'
    IE_NAME = u'mtv'

-    def report_webpage(self, video_id):
-        """Report information extraction."""
-        self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, video_id))
-
    def report_extraction(self, video_id):
        """Report information extraction."""
        self._downloader.to_screen(u'[%s] %s: Extracting information' % (self.IE_NAME, video_id))
@@ -2993,14 +3113,8 @@ class MTVIE(InfoExtractor):
        if not mobj.group('proto'):
            url = 'http://' + url
        video_id = mobj.group('videoid')
-        self.report_webpage(video_id)

-        request = urllib2.Request(url)
-        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % str(err))
-            return
+        webpage = self._download_webpage(url, video_id)

        mobj = re.search(r'<meta name="mtv_vt" content="([^"]+)"/>', webpage)
        if mobj is None:
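Several extractors in this diff (MTV, XVideos, InfoQ, NBA) replace hand-rolled urllib2 blocks with the base class's _download_webpage helper. Roughly what such a helper does, as an assumption rather than the exact upstream code:

def _download_webpage(self, url, video_id):
    """Rough sketch only; relies on the compat_* names used throughout this file."""
    self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, video_id))
    try:
        return compat_urllib_request.urlopen(url).read().decode('utf-8')
    except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
        self._downloader.trouble(u'ERROR: unable to download webpage: %s' % compat_str(err))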
@@ -3028,11 +3142,11 @@ class MTVIE(InfoExtractor):
        videogen_url = 'http://www.mtv.com/player/includes/mediaGen.jhtml?uri=' + mtvn_uri + '&id=' + content_id + '&vid=' + video_id + '&ref=www.mtvn.com&viewUri=' + mtvn_uri

        self.report_extraction(video_id)
-        request = urllib2.Request(videogen_url)
+        request = compat_urllib_request.Request(videogen_url)
        try:
-            metadataXml = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: unable to download video metadata: %s' % str(err))
+            metadataXml = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: unable to download video metadata: %s' % compat_str(err))
            return

        mdoc = xml.etree.ElementTree.fromstring(metadataXml)
@@ -3053,6 +3167,7 @@ class MTVIE(InfoExtractor):
            'id': video_id,
            'url': video_url,
            'uploader': performer,
+            'upload_date': None,
            'title': video_title,
            'ext': ext,
            'format': format,
@@ -3062,20 +3177,15 @@ class MTVIE(InfoExtractor):

class YoukuIE(InfoExtractor):
    _VALID_URL = r'(?:http://)?v\.youku\.com/v_show/id_(?P<ID>[A-Za-z0-9]+)\.html'
    IE_NAME = u'Youku'

-    def __init__(self, downloader=None):
-        InfoExtractor.__init__(self, downloader)
-
    def report_download_webpage(self, file_id):
        """Report webpage download."""
-        self._downloader.to_screen(u'[Youku] %s: Downloading webpage' % file_id)
+        self._downloader.to_screen(u'[%s] %s: Downloading webpage' % (self.IE_NAME, file_id))

    def report_extraction(self, file_id):
        """Report information extraction."""
-        self._downloader.to_screen(u'[Youku] %s: Extracting information' % file_id)
+        self._downloader.to_screen(u'[%s] %s: Extracting information' % (self.IE_NAME, file_id))

    def _gen_sid(self):
        nowTime = int(time.time() * 1000)
@@ -3114,23 +3224,24 @@ class YoukuIE(InfoExtractor):
        info_url = 'http://v.youku.com/player/getPlayList/VideoIDS/' + video_id

-        request = urllib2.Request(info_url, None, std_headers)
+        request = compat_urllib_request.Request(info_url, None, std_headers)
        try:
            self.report_download_webpage(video_id)
-            jsondata = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error) as err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            jsondata = compat_urllib_request.urlopen(request).read()
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
            return

        self.report_extraction(video_id)
        try:
-            config = json.loads(jsondata)
+            jsonstr = jsondata.decode('utf-8')
+            config = json.loads(jsonstr)

            video_title = config['data'][0]['title']
            seed = config['data'][0]['seed']

            format = self._downloader.params.get('format', None)
-            supported_format = config['data'][0]['streamfileids'].keys()
+            supported_format = list(config['data'][0]['streamfileids'].keys())

            if format is None or format == 'best':
                if 'hd2' in supported_format:
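The extra decode step added to YoukuIE above matters on Python 3, where urlopen().read() returns bytes and json.loads historically wanted str:

import json

# Stand-in for the bytes returned by compat_urllib_request.urlopen(request).read()
jsondata = b'{"data": [{"title": "example"}]}'
jsonstr = jsondata.decode('utf-8')   # bytes -> str, works on both Pythons
config = json.loads(jsonstr)
print(config['data'][0]['title'])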
@@ -3147,15 +3258,8 @@ class YoukuIE(InfoExtractor):
            fileid = config['data'][0]['streamfileids'][format]
-            seg_number = len(config['data'][0]['segs'][format])
-
-            keys=[]
-            for i in xrange(seg_number):
-                keys.append(config['data'][0]['segs'][format][i]['k'])
-
-            #TODO check error
-            #youku only could be viewed from mainland china
-        except:
+            keys = [s['k'] for s in config['data'][0]['segs'][format]]
+        except (UnicodeDecodeError, ValueError, KeyError):
            self._downloader.trouble(u'ERROR: unable to extract info section')
            return
@@ -3174,9 +3278,9 @@ class YoukuIE(InfoExtractor):
                'id': '%s_part%02d' % (video_id, index),
                'url': download_url,
                'uploader': None,
+                'upload_date': None,
                'title': video_title,
                'ext': ext,
-                'format': u'NA'
            }
            files_info.append(info)
@@ -3186,7 +3290,7 @@ class XNXXIE(InfoExtractor):
class XNXXIE(InfoExtractor):
    """Information extractor for xnxx.com"""

-    _VALID_URL = r'^(?:https?://)?video\.xnxx\.com/video([0-9]+)/(.*)'
+    _VALID_URL = r'^http://video\.xnxx\.com/video([0-9]+)/(.*)'
    IE_NAME = u'xnxx'
    VIDEO_URL_RE = r'flv_url=(.*?)&amp;'
    VIDEO_TITLE_RE = r'<title>(.*?)\s+-\s+XNXX.COM'
@@ -3205,14 +3309,15 @@ class XNXXIE(InfoExtractor):
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return
-        video_id = mobj.group(1).decode('utf-8')
+        video_id = mobj.group(1)

        self.report_webpage(video_id)

        # Get webpage content
        try:
-            webpage = urllib2.urlopen(url).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
+            webpage_bytes = compat_urllib_request.urlopen(url).read()
+            webpage = webpage_bytes.decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % err)
            return
@@ -3220,38 +3325,36 @@ class XNXXIE(InfoExtractor):
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video url')
            return
-        video_url = urllib.unquote(result.group(1).decode('utf-8'))
+        video_url = compat_urllib_parse.unquote(result.group(1))

        result = re.search(self.VIDEO_TITLE_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video title')
            return
-        video_title = result.group(1).decode('utf-8')
+        video_title = result.group(1)

        result = re.search(self.VIDEO_THUMB_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video thumbnail')
            return
-        video_thumbnail = result.group(1).decode('utf-8')
+        video_thumbnail = result.group(1)

-        info = {'id': video_id,
+        return [{
+            'id': video_id,
            'url': video_url,
            'uploader': None,
            'upload_date': None,
            'title': video_title,
            'ext': 'flv',
-            'format': 'flv',
            'thumbnail': video_thumbnail,
            'description': None,
-            'player_url': None}
-
-        return [info]
+        }]
class GooglePlusIE(InfoExtractor):
    """Information extractor for plus.google.com."""

-    _VALID_URL = r'(?:https://)?plus\.google\.com/(?:\w+/)*?(\d+)/posts/(\w+)'
+    _VALID_URL = r'(?:https://)?plus\.google\.com/(?:[^/]+/)*?posts/(\w+)'
    IE_NAME = u'plus.google'

    def __init__(self, downloader=None):
@@ -3259,7 +3362,7 @@ class GooglePlusIE(InfoExtractor):
    def report_extract_entry(self, url):
        """Report downloading extry"""
-        self._downloader.to_screen(u'[plus.google] Downloading entry: %s' % url.decode('utf-8'))
+        self._downloader.to_screen(u'[plus.google] Downloading entry: %s' % url)

    def report_date(self, upload_date):
        """Report downloading extry"""
@@ -3267,15 +3370,15 @@ class GooglePlusIE(InfoExtractor):
    def report_uploader(self, uploader):
        """Report downloading extry"""
-        self._downloader.to_screen(u'[plus.google] Uploader: %s' % uploader.decode('utf-8'))
+        self._downloader.to_screen(u'[plus.google] Uploader: %s' % uploader)

    def report_title(self, video_title):
        """Report downloading extry"""
-        self._downloader.to_screen(u'[plus.google] Title: %s' % video_title.decode('utf-8'))
+        self._downloader.to_screen(u'[plus.google] Title: %s' % video_title)

    def report_extract_vid_page(self, video_page):
        """Report information extraction."""
-        self._downloader.to_screen(u'[plus.google] Extracting video page: %s' % video_page.decode('utf-8'))
+        self._downloader.to_screen(u'[plus.google] Extracting video page: %s' % video_page)

    def _real_extract(self, url):
        # Extract id from URL
@@ -3285,21 +3388,21 @@ class GooglePlusIE(InfoExtractor):
            return

        post_url = mobj.group(0)
-        video_id = mobj.group(2)
+        video_id = mobj.group(1)

        video_extension = 'flv'

        # Step 1, Retrieve post webpage to extract further information
        self.report_extract_entry(post_url)
-        request = urllib2.Request(post_url)
+        request = compat_urllib_request.Request(post_url)
        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve entry webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read().decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve entry webpage: %s' % compat_str(err))
            return

        # Extract update date
-        upload_date = u'NA'
+        upload_date = None
        pattern = 'title="Timestamp">(.*?)</a>'
        mobj = re.search(pattern, webpage)
        if mobj:
@@ -3310,7 +3413,7 @@ class GooglePlusIE(InfoExtractor):
            self.report_date(upload_date)

        # Extract uploader
-        uploader = u'NA'
+        uploader = None
        pattern = r'rel\="author".*?>(.*?)</a>'
        mobj = re.search(pattern, webpage)
        if mobj:
@@ -3333,11 +3436,11 @@ class GooglePlusIE(InfoExtractor):
            self._downloader.trouble(u'ERROR: unable to extract video page URL')

        video_page = mobj.group(1)
-        request = urllib2.Request(video_page)
+        request = compat_urllib_request.Request(video_page)
        try:
-            webpage = urllib2.urlopen(request).read()
-        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
-            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % str(err))
+            webpage = compat_urllib_request.urlopen(request).read().decode('utf-8')
+        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+            self._downloader.trouble(u'ERROR: Unable to retrieve video webpage: %s' % compat_str(err))
            return

        self.report_extract_vid_page(video_page)
@@ -3357,209 +3460,24 @@ class GooglePlusIE(InfoExtractor):
        # Only get the url. The resolution part in the tuple has no use anymore
        video_url = video_url[-1]
        # Treat escaped \u0026 style hex
-        video_url = unicode(video_url, "unicode_escape")
+        try:
+            video_url = video_url.decode("unicode_escape")
+        except AttributeError: # Python 3
+            video_url = bytes(video_url, 'ascii').decode('unicode-escape')

        return [{
-            'id': video_id.decode('utf-8'),
-            'url': video_url,
-            'uploader': uploader.decode('utf-8'),
-            'upload_date': upload_date.decode('utf-8'),
-            'title': video_title.decode('utf-8'),
-            'ext': video_extension.decode('utf-8'),
-            'format': u'NA',
-            'player_url': None,
-        }]
+            'id': video_id,
+            'url': video_url,
+            'uploader': uploader,
+            'upload_date': upload_date,
+            'title': video_title,
+            'ext': video_extension,
+        }]
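The try/except added above expands \u0026-style escapes on both Pythons; the same logic as a small standalone function:

def unescape_js(video_url):
    try:
        return video_url.decode('unicode_escape')            # Python 2 str
    except AttributeError:                                   # Python 3 str has no .decode
        return bytes(video_url, 'ascii').decode('unicode-escape')

print(unescape_js(r'http://example.com/?a=1\u0026b=2'))  # -> ...a=1&b=2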
class YouPornIE(InfoExtractor):
"""Information extractor for youporn.com."""
_VALID_URL = r'^(?:https?://)?(?:\w+\.)?youporn\.com/watch/(?P<videoid>[0-9]+)/(?P<title>[^/]+)'
IE_NAME = u'youporn'
VIDEO_TITLE_RE = r'videoTitleArea">(?P<title>.*)</h1>'
VIDEO_DATE_RE = r'Date:</b>(?P<date>.*)</li>'
VIDEO_UPLOADER_RE = r'Submitted:</b>(?P<uploader>.*)</li>'
DOWNLOAD_LIST_RE = r'(?s)<ul class="downloadList">(?P<download_list>.*?)</ul>'
LINK_RE = r'(?s)<a href="(?P<url>[^"]+)">'
def __init__(self, downloader=None):
InfoExtractor.__init__(self, downloader)
def report_id(self, video_id):
"""Report finding video ID"""
self._downloader.to_screen(u'[youporn] Video ID: %s' % video_id)
def report_webpage(self, url):
"""Report downloading page"""
self._downloader.to_screen(u'[youporn] Downloaded page: %s' % url)
def report_title(self, video_title):
"""Report dfinding title"""
self._downloader.to_screen(u'[youporn] Title: %s' % video_title)
def report_uploader(self, uploader):
"""Report dfinding title"""
self._downloader.to_screen(u'[youporn] Uploader: %s' % uploader)
def report_upload_date(self, video_date):
"""Report finding date"""
self._downloader.to_screen(u'[youporn] Date: %s' % video_date)
def _print_formats(self, formats):
"""Print all available formats"""
print 'Available formats:'
print u'ext\t\tformat'
print u'---------------------------------'
for format in formats:
print u'%s\t\t%s' % (format['ext'], format['format'])
def _specific(self, req_format, formats):
for x in formats:
if(x["format"]==req_format):
return x
return None
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
if mobj is None:
self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
return
video_id = mobj.group('videoid').decode('utf-8')
self.report_id(video_id)
# Get webpage content
try:
webpage = urllib2.urlopen(url).read()
except (urllib2.URLError, httplib.HTTPException, socket.error), err:
self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % err)
return
self.report_webpage(url)
# Get the video title
result = re.search(self.VIDEO_TITLE_RE, webpage)
if result is None:
self._downloader.trouble(u'ERROR: unable to extract video title')
return
video_title = result.group('title').decode('utf-8').strip()
self.report_title(video_title)
# Get the video date
result = re.search(self.VIDEO_DATE_RE, webpage)
if result is None:
self._downloader.trouble(u'ERROR: unable to extract video date')
return
upload_date = result.group('date').decode('utf-8').strip()
self.report_upload_date(upload_date)
# Get the video uploader
result = re.search(self.VIDEO_UPLOADER_RE, webpage)
if result is None:
self._downloader.trouble(u'ERROR: unable to extract uploader')
return
video_uploader = result.group('uploader').decode('utf-8').strip()
video_uploader = clean_html( video_uploader )
self.report_uploader(video_uploader)
# Get all of the formats available
result = re.search(self.DOWNLOAD_LIST_RE, webpage)
if result is None:
self._downloader.trouble(u'ERROR: unable to extract download list')
return
download_list_html = result.group('download_list').decode('utf-8').strip()
# Get all of the links from the page
links = re.findall(self.LINK_RE, download_list_html)
if(len(links) == 0):
self._downloader.trouble(u'ERROR: no known formats available for video')
return
self._downloader.to_screen(u'[youporn] Links found: %d' % len(links))
formats = []
for link in links:
# A link looks like this:
# http://cdn1.download.youporn.phncdn.com/201210/31/8004515/480p_370k_8004515/YouPorn%20-%20Nubile%20Films%20The%20Pillow%20Fight.mp4?nvb=20121113051249&nva=20121114051249&ir=1200&sr=1200&hash=014b882080310e95fb6a0
# A path looks like this:
# /201210/31/8004515/480p_370k_8004515/YouPorn%20-%20Nubile%20Films%20The%20Pillow%20Fight.mp4
video_url = unescapeHTML( link.decode('utf-8') )
path = urlparse( video_url ).path
extension = os.path.splitext( path )[1][1:]
format = path.split('/')[4].split('_')[:2]
size = format[0]
bitrate = format[1]
format = "-".join( format )
title = u'%s-%s-%s' % (video_title, size, bitrate)
formats.append({
                'id': video_id,
                'url': video_url,
                'uploader': video_uploader,
                'upload_date': upload_date,
                'title': title,
                'ext': extension,
                'format': format,
                'thumbnail': None,
                'description': None,
                'player_url': None
            })

        if self._downloader.params.get('listformats', None):
            self._print_formats(formats)
            return

        req_format = self._downloader.params.get('format', None)
        #format_limit = self._downloader.params.get('format_limit', None)
        self._downloader.to_screen(u'[youporn] Format: %s' % req_format)

        if req_format is None or req_format == 'best':
            return [formats[0]]
        elif req_format == 'worst':
            return [formats[-1]]
        elif req_format in ('-1', 'all'):
            return formats
        else:
            format = self._specific( req_format, formats )
            if result is None:
                self._downloader.trouble(u'ERROR: requested format not available')
                return
            return [format]


@@ -3567,130 +3485,298 @@ class PornotubeIE(InfoExtractor):
class PornotubeIE(InfoExtractor):
    """Information extractor for pornotube.com."""

    _VALID_URL = r'^(?:https?://)?(?:\w+\.)?pornotube\.com(/c/(?P<channel>[0-9]+))?(/m/(?P<videoid>[0-9]+))(/(?P<title>.+))$'
    IE_NAME = u'pornotube'
    VIDEO_URL_RE = r'url: "(?P<url>http://video[0-9].pornotube.com/.+\.flv)",'
    VIDEO_UPLOADED_RE = r'<div class="video_added_by">Added (?P<date>[0-9\/]+) by'

    def __init__(self, downloader=None):
        InfoExtractor.__init__(self, downloader)

    def report_extract_entry(self, url):
        """Report downloading extry"""
        self._downloader.to_screen(u'[pornotube] Downloading entry: %s' % url.decode('utf-8'))

    def report_date(self, upload_date):
        """Report finding uploaded date"""
        self._downloader.to_screen(u'[pornotube] Entry date: %s' % upload_date)

    def report_webpage(self, url):
        """Report downloading page"""
        self._downloader.to_screen(u'[pornotube] Downloaded page: %s' % url)

    def report_title(self, video_title):
        """Report downloading extry"""
        self._downloader.to_screen(u'[pornotube] Title: %s' % video_title)

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return

        video_id = mobj.group('videoid').decode('utf-8')
        video_title = mobj.group('title').decode('utf-8')
        self.report_title(video_title);

        # Get webpage content
        try:
            webpage = urllib2.urlopen(url).read()
        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % err)
            return
        self.report_webpage(url)

        # Get the video URL
        result = re.search(self.VIDEO_URL_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video url')
            return
        video_url = urllib.unquote(result.group('url').decode('utf-8'))
        self.report_extract_entry(video_url)

        #Get the uploaded date
        result = re.search(self.VIDEO_UPLOADED_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video title')
            return
        upload_date = result.group('date').decode('utf-8')
        self.report_date(upload_date);

        info = {'id': video_id,
                'url': video_url,
                'uploader': None,
                'upload_date': upload_date,
                'title': video_title,
                'ext': 'flv',
                'format': 'flv',
                'thumbnail': None,
                'description': None,
                'player_url': None}

        return [info]

class YouJizzIE(InfoExtractor):
    """Information extractor for youjizz.com."""

    _VALID_URL = r'^(?:https?://)?(?:\w+\.)?youjizz\.com/videos/([^.]+).html$'
    IE_NAME = u'youjizz'
    VIDEO_TITLE_RE = r'<title>(?P<title>.*)</title>'
    EMBED_PAGE_RE = r'http://www.youjizz.com/videos/embed/(?P<videoid>[0-9]+)'
    SOURCE_RE = r'so.addVariable\("file",encodeURIComponent\("(?P<source>[^"]+)"\)\);'

    def __init__(self, downloader=None):
        InfoExtractor.__init__(self, downloader)

    def report_extract_entry(self, url):
        """Report downloading extry"""
        self._downloader.to_screen(u'[youjizz] Downloading entry: %s' % url.decode('utf-8'))

    def report_webpage(self, url):
        """Report downloading page"""
        self._downloader.to_screen(u'[youjizz] Downloaded page: %s' % url)

    def report_title(self, video_title):
        """Report downloading extry"""
        self._downloader.to_screen(u'[youjizz] Title: %s' % video_title.decode('utf-8'))

    def report_embed_page(self, embed_page):
        """Report downloading extry"""
        self._downloader.to_screen(u'[youjizz] Embed Page: %s' % embed_page.decode('utf-8'))

    def _real_extract(self, url):
        # Get webpage content
        try:
            webpage = urllib2.urlopen(url).read()
        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
            self._downloader.trouble(u'ERROR: unable to download video webpage: %s' % err)
            return
        self.report_webpage(url)

        # Get the video title
        result = re.search(self.VIDEO_TITLE_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video title')
            return
        video_title = result.group('title').decode('utf-8').strip()
        self.report_title(video_title)

        # Get the embed page
        result = re.search(self.EMBED_PAGE_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract embed page')
            return

        embed_page_url = result.group(0).decode('utf-8').strip()
        video_id = result.group('videoid').decode('utf-8')
        self.report_embed_page(embed_page_url)

        try:
            webpage = urllib2.urlopen(embed_page_url).read()
        except (urllib2.URLError, httplib.HTTPException, socket.error), err:
            self._downloader.trouble(u'ERROR: unable to download video embed page: %s' % err)
            return

        # Get the video URL
        result = re.search(self.SOURCE_RE, webpage)
        if result is None:
            self._downloader.trouble(u'ERROR: unable to extract video url')
            return
        video_url = result.group('source').decode('utf-8')
        self.report_extract_entry(video_url)

        info = {'id': video_id,
                'url': video_url,
                'uploader': None,
                'upload_date': None,
                'title': video_title,
                'ext': 'flv',
                'format': 'flv',
                'thumbnail': None,
                'description': None,
                'player_url': embed_page_url}

        return [info]

class NBAIE(InfoExtractor):
    _VALID_URL = r'^(?:https?://)?(?:watch\.|www\.)?nba\.com/(?:nba/)?video(/[^?]*)(\?.*)?$'
    IE_NAME = u'nba'

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return

        video_id = mobj.group(1)
        if video_id.endswith('/index.html'):
            video_id = video_id[:-len('/index.html')]

        webpage = self._download_webpage(url, video_id)

        video_url = u'http://ht-mobile.cdn.turner.com/nba/big' + video_id + '_nba_1280x720.mp4'
        def _findProp(rexp, default=None):
            m = re.search(rexp, webpage)
            if m:
                return unescapeHTML(m.group(1))
            else:
                return default

        shortened_video_id = video_id.rpartition('/')[2]
        title = _findProp(r'<meta property="og:title" content="(.*?)"', shortened_video_id).replace('NBA.com: ', '')
        info = {
            'id': shortened_video_id,
            'url': video_url,
            'ext': 'mp4',
            'title': title,
            'uploader_date': _findProp(r'<b>Date:</b> (.*?)</div>'),
            'description': _findProp(r'<div class="description">(.*?)</h1>'),
        }
        return [info]

class JustinTVIE(InfoExtractor):
    """Information extractor for justin.tv and twitch.tv"""
    # TODO: One broadcast may be split into multiple videos. The key
    # 'broadcast_id' is the same for all parts, and 'broadcast_part'
    # starts at 1 and increases. Can we treat all parts as one video?

    _VALID_URL = r"""(?x)^(?:http://)?(?:www\.)?(?:twitch|justin)\.tv/
        ([^/]+)(?:/b/([^/]+))?/?(?:\#.*)?$"""
    _JUSTIN_PAGE_LIMIT = 100
    IE_NAME = u'justin.tv'

    def report_extraction(self, file_id):
        """Report information extraction."""
        self._downloader.to_screen(u'[%s] %s: Extracting information' % (self.IE_NAME, file_id))

    def report_download_page(self, channel, offset):
        """Report attempt to download a single page of videos."""
        self._downloader.to_screen(u'[%s] %s: Downloading video information from %d to %d' %
                (self.IE_NAME, channel, offset, offset + self._JUSTIN_PAGE_LIMIT))

    # Return count of items, list of *valid* items
    def _parse_page(self, url):
        try:
            urlh = compat_urllib_request.urlopen(url)
            webpage_bytes = urlh.read()
            webpage = webpage_bytes.decode('utf-8', 'ignore')
        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
            self._downloader.trouble(u'ERROR: unable to download video info JSON: %s' % compat_str(err))
            return

        response = json.loads(webpage)
        info = []
        for clip in response:
            video_url = clip['video_file_url']
            if video_url:
                video_extension = os.path.splitext(video_url)[1][1:]
                video_date = re.sub('-', '', clip['created_on'][:10])
                info.append({
                    'id': clip['id'],
                    'url': video_url,
                    'title': clip['title'],
                    'uploader': clip.get('user_id', clip.get('channel_id')),
                    'upload_date': video_date,
                    'ext': video_extension,
                })
        return (len(response), info)

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return

        api = 'http://api.justin.tv'
        video_id = mobj.group(mobj.lastindex)
        paged = False
        if mobj.lastindex == 1:
            paged = True
            api += '/channel/archives/%s.json'
        else:
            api += '/clip/show/%s.json'
        api = api % (video_id,)

        self.report_extraction(video_id)

        info = []
        offset = 0
        limit = self._JUSTIN_PAGE_LIMIT
        while True:
            if paged:
                self.report_download_page(video_id, offset)
            page_url = api + ('?offset=%d&limit=%d' % (offset, limit))
            page_count, page_info = self._parse_page(page_url)
            info.extend(page_info)
            if not paged or page_count != limit:
                break
            offset += limit
        return info

class FunnyOrDieIE(InfoExtractor):
    _VALID_URL = r'^(?:https?://)?(?:www\.)?funnyordie\.com/videos/(?P<id>[0-9a-f]+)/.*$'

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return

        video_id = mobj.group('id')
        webpage = self._download_webpage(url, video_id)

        m = re.search(r'<video[^>]*>\s*<source[^>]*>\s*<source src="(?P<url>[^"]+)"', webpage, re.DOTALL)
        if not m:
            self._downloader.trouble(u'ERROR: unable to find video information')
        video_url = unescapeHTML(m.group('url'))

        m = re.search(r"class='player_page_h1'>\s+<a.*?>(?P<title>.*?)</a>", webpage)
        if not m:
            self._downloader.trouble(u'Cannot find video title')
        title = unescapeHTML(m.group('title'))

        m = re.search(r'<meta property="og:description" content="(?P<desc>.*?)"', webpage)
        if m:
            desc = unescapeHTML(m.group('desc'))
        else:
            desc = None

        info = {
            'id': video_id,
            'url': video_url,
            'ext': 'mp4',
            'title': title,
            'description': desc,
        }
        return [info]
class TweetReelIE(InfoExtractor):
    _VALID_URL = r'^(?:https?://)?(?:www\.)?tweetreel\.com/[?](?P<id>[0-9a-z]+)$'

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        if mobj is None:
            self._downloader.trouble(u'ERROR: invalid URL: %s' % url)
            return

        video_id = mobj.group('id')
        webpage = self._download_webpage(url, video_id)

        m = re.search(r'<div id="left" status_id="([0-9]+)">', webpage)
        if not m:
            self._downloader.trouble(u'ERROR: Cannot find status ID')
        status_id = m.group(1)

        m = re.search(r'<div class="tweet_text">(.*?)</div>', webpage, flags=re.DOTALL)
        if not m:
            self._downloader.trouble(u'WARNING: Cannot find description')
        desc = unescapeHTML(re.sub('<a.*?</a>', '', m.group(1))).strip()

        m = re.search(r'<div class="tweet_info">.*?from <a target="_blank" href="https?://twitter.com/(?P<uploader_id>.+?)">(?P<uploader>.+?)</a>', webpage, flags=re.DOTALL)
        if not m:
            self._downloader.trouble(u'ERROR: Cannot find uploader')
        uploader = unescapeHTML(m.group('uploader'))
        uploader_id = unescapeHTML(m.group('uploader_id'))

        m = re.search(r'<span unixtime="([0-9]+)"', webpage)
        if not m:
            self._downloader.trouble(u'ERROR: Cannot find upload date')
        upload_date = datetime.datetime.fromtimestamp(int(m.group(1))).strftime('%Y%m%d')

        title = desc
        video_url = 'http://files.tweetreel.com/video/' + status_id + '.mov'

        info = {
            'id': video_id,
            'url': video_url,
            'ext': 'mov',
            'title': title,
            'description': desc,
            'uploader': uploader,
            'uploader_id': uploader_id,
            'internal_id': status_id,
            'upload_date': upload_date
        }
        return [info]
class SteamIE(InfoExtractor):
    _VALID_URL = r"""http://store.steampowered.com/
                (?P<urltype>video|app)/ #If the page is only for videos or for a game
                (?P<gameID>\d+)/?
                (?P<videoID>\d*)(?P<extra>\??) #For urltype == video we sometimes get the videoID
                """

    def suitable(self, url):
        """Receives a URL and returns True if suitable for this IE."""
        return re.match(self._VALID_URL, url, re.VERBOSE) is not None

    def _real_extract(self, url):
        m = re.match(self._VALID_URL, url, re.VERBOSE)
        urlRE = r"'movie_(?P<videoID>\d+)': \{\s*FILENAME: \"(?P<videoURL>[\w:/\.\?=]+)\"(,\s*MOVIE_NAME: \"(?P<videoName>[\w:/\.\?=\+-]+)\")?\s*\},"
        gameID = m.group('gameID')
        videourl = 'http://store.steampowered.com/video/%s/' % gameID
        webpage = self._download_webpage(videourl, gameID)
        mweb = re.finditer(urlRE, webpage)
        namesRE = r'<span class="title">(?P<videoName>.+?)</span>'
        titles = re.finditer(namesRE, webpage)
        videos = []
        for vid,vtitle in zip(mweb,titles):
            video_id = vid.group('videoID')
            title = vtitle.group('videoName')
            video_url = vid.group('videoURL')
            if not video_url:
                self._downloader.trouble(u'ERROR: Cannot find video url for %s' % video_id)
            info = {
                'id': video_id,
                'url': video_url,
                'ext': 'flv',
                'title': unescapeHTML(title)
            }
            videos.append(info)
        return videos
class UstreamIE(InfoExtractor):
    _VALID_URL = r'http://www.ustream.tv/recorded/(?P<videoID>\d+)'
    IE_NAME = u'ustream'

    def _real_extract(self, url):
        m = re.match(self._VALID_URL, url)
        video_id = m.group('videoID')
        video_url = u'http://tcdn.ustream.tv/video/%s' % video_id
        webpage = self._download_webpage(url, video_id)
        m = re.search(r'data-title="(?P<title>.+)"', webpage)
        title = m.group('title')
        m = re.search(r'<a class="state" data-content-type="channel" data-content-id="(?P<uploader>\d+)"', webpage)
        uploader = m.group('uploader')
        info = {
            'id': video_id,
            'url': video_url,
            'ext': 'flv',
            'title': title,
            'uploader': uploader
        }
        return [info]
def gen_extractors():
    """ Return a list of an instance of every supported extractor.
    The order does matter; the first extractor matched is the one handling the URL.
    """
    return [
        YoutubePlaylistIE(),
        YoutubeChannelIE(),
        YoutubeUserIE(),
        YoutubeSearchIE(),
        YoutubeIE(),
        MetacafeIE(),
        DailymotionIE(),
        GoogleSearchIE(),
        PhotobucketIE(),
        YahooIE(),
        YahooSearchIE(),
        DepositFilesIE(),
        FacebookIE(),
        BlipTVUserIE(),
        BlipTVIE(),
        VimeoIE(),
        MyVideoIE(),
        ComedyCentralIE(),
        EscapistIE(),
        CollegeHumorIE(),
        XVideosIE(),
        SoundcloudIE(),
        InfoQIE(),
        MixcloudIE(),
        StanfordOpenClassroomIE(),
        MTVIE(),
        YoukuIE(),
        XNXXIE(),
        GooglePlusIE(),
        ArteTvIE(),
        NBAIE(),
        JustinTVIE(),
        FunnyOrDieIE(),
        TweetReelIE(),
        SteamIE(),
        UstreamIE(),
        GenericIE()
    ]

youtube_dl/PostProcessor.py

@@ -1,12 +1,14 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

+from __future__ import absolute_import
+
import os
import subprocess
import sys
import time

-from utils import *
+from .utils import *


class PostProcessor(object):

@@ -60,13 +62,14 @@ class AudioConversionError(BaseException):
        self.message = message

class FFmpegExtractAudioPP(PostProcessor):
-    def __init__(self, downloader=None, preferredcodec=None, preferredquality=None, keepvideo=False):
+    def __init__(self, downloader=None, preferredcodec=None, preferredquality=None, keepvideo=False, nopostoverwrites=False):
        PostProcessor.__init__(self, downloader)
        if preferredcodec is None:
            preferredcodec = 'best'
        self._preferredcodec = preferredcodec
        self._preferredquality = preferredquality
        self._keepvideo = keepvideo
+        self._nopostoverwrites = nopostoverwrites
        self._exes = self.detect_executables()

    @staticmethod

@@ -84,14 +87,14 @@ class FFmpegExtractAudioPP(PostProcessor):
        if not self._exes['ffprobe'] and not self._exes['avprobe']: return None
        try:
            cmd = [self._exes['avprobe'] or self._exes['ffprobe'], '-show_streams', '--', encodeFilename(path)]
-            handle = subprocess.Popen(cmd, stderr=file(os.path.devnull, 'w'), stdout=subprocess.PIPE)
+            handle = subprocess.Popen(cmd, stderr=compat_subprocess_get_DEVNULL(), stdout=subprocess.PIPE)
            output = handle.communicate()[0]
            if handle.wait() != 0:
                return None
        except (IOError, OSError):
            return None
        audio_codec = None
-        for line in output.split('\n'):
+        for line in output.decode('ascii', 'ignore').split('\n'):
            if line.startswith('codec_name='):
                audio_codec = line.split('=')[1].strip()
            elif line.strip() == 'codec_type=audio' and audio_codec is not None:

@@ -169,8 +172,11 @@ class FFmpegExtractAudioPP(PostProcessor):
        prefix, sep, ext = path.rpartition(u'.') # not os.path.splitext, since the latter does not work on unicode in all setups
        new_path = prefix + sep + extension
-        self._downloader.to_screen(u'[' + (self._exes['avconv'] and 'avconv' or 'ffmpeg') + '] Destination: ' + new_path)
        try:
+            if self._nopostoverwrites and os.path.exists(encodeFilename(new_path)):
+                self._downloader.to_screen(u'[youtube] Post-process file %s exists, skipping' % new_path)
+            else:
+                self._downloader.to_screen(u'[' + (self._exes['avconv'] and 'avconv' or 'ffmpeg') + '] Destination: ' + new_path)
-            self.run_ffmpeg(path, new_path, acodec, more_opts)
+                self.run_ffmpeg(path, new_path, acodec, more_opts)
        except:
            etype,e,tb = sys.exc_info()
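
For a sense of how the new flag is wired up (illustrative sketch; the actual call site is in __init__.py below, and `fd` stands for an already-configured FileDownloader):

    # Hypothetical standalone wiring of the post-processor:
    pp = FFmpegExtractAudioPP(preferredcodec='mp3', preferredquality='5',
                              keepvideo=False, nopostoverwrites=True)
    fd.add_post_processor(pp)
    # With nopostoverwrites=True (--no-post-overwrites), an existing
    # post-processed file is left alone instead of being re-encoded.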

youtube_dl/__init__.py

@@ -2,6 +2,7 @@
# -*- coding: utf-8 -*-

from __future__ import with_statement
+from __future__ import absolute_import

__authors__  = (
    'Ricardo Garcia Gonzalez',
@@ -18,17 +19,13 @@ __authors__ = (
    'Ori Avtalion',
    'shizeeg',
    'Filippo Valsorda',
+    'Christian Albrecht',
+    'Dave Vasilevsky',
+    'Jaime Marquínez Ferrándiz',
    )

__license__ = 'Public Domain'
-__version__ = '2012.10.09'
-
-UPDATE_URL = 'https://raw.github.com/rg3/youtube-dl/master/youtube-dl'
-UPDATE_URL_VERSION = 'https://raw.github.com/rg3/youtube-dl/master/LATEST_VERSION'
-UPDATE_URL_EXE = 'https://raw.github.com/rg3/youtube-dl/master/youtube-dl.exe'
-
-import cookielib
import getpass
import optparse
import os
@@ -37,77 +34,15 @@ import shlex
import socket
import subprocess
import sys
-import urllib2
import warnings
+import platform

-from utils import *
-from FileDownloader import *
-from InfoExtractors import *
-from PostProcessor import *
+from .utils import *
+from .update import update_self
+from .version import __version__
+from .FileDownloader import *
+from .InfoExtractors import gen_extractors
+from .PostProcessor import *

-def updateSelf(downloader, filename):
-    ''' Update the program file with the latest version from the repository '''
-    # Note: downloader only used for options
-    if not os.access(filename, os.W_OK):
-        sys.exit('ERROR: no write permissions on %s' % filename)
-
-    downloader.to_screen(u'Updating to latest version...')
-
-    urlv = urllib2.urlopen(UPDATE_URL_VERSION)
-    newversion = urlv.read().strip()
-    if newversion == __version__:
-        downloader.to_screen(u'youtube-dl is up-to-date (' + __version__ + ')')
-        return
-    urlv.close()
-
-    if hasattr(sys, "frozen"): #py2exe
-        exe = os.path.abspath(filename)
-        directory = os.path.dirname(exe)
-        if not os.access(directory, os.W_OK):
-            sys.exit('ERROR: no write permissions on %s' % directory)
-
-        try:
-            urlh = urllib2.urlopen(UPDATE_URL_EXE)
-            newcontent = urlh.read()
-            urlh.close()
-            with open(exe + '.new', 'wb') as outf:
-                outf.write(newcontent)
-        except (IOError, OSError), err:
-            sys.exit('ERROR: unable to download latest version')
-
-        try:
-            bat = os.path.join(directory, 'youtube-dl-updater.bat')
-            b = open(bat, 'w')
-            print >> b, """
-echo Updating youtube-dl...
-ping 127.0.0.1 -n 5 -w 1000 > NUL
-move /Y "%s.new" "%s"
-del "%s"
-            """ %(exe, exe, bat)
-            b.close()
-            os.startfile(bat)
-        except (IOError, OSError), err:
-            sys.exit('ERROR: unable to overwrite current version')
-    else:
-        try:
-            urlh = urllib2.urlopen(UPDATE_URL)
-            newcontent = urlh.read()
-            urlh.close()
-        except (IOError, OSError), err:
-            sys.exit('ERROR: unable to download latest version')
-
-        try:
-            with open(filename, 'wb') as outf:
-                outf.write(newcontent)
-        except (IOError, OSError), err:
-            sys.exit('ERROR: unable to overwrite current version')
-
-    downloader.to_screen(u'Updated youtube-dl. Restart youtube-dl to use the new version.')

def parseOpts():
    def _readOptions(filename_bytes):
@@ -128,9 +63,12 @@ def parseOpts():
        opts = []

-        if option._short_opts: opts.append(option._short_opts[0])
-        if option._long_opts: opts.append(option._long_opts[0])
-        if len(opts) > 1: opts.insert(1, ', ')
+        if option._short_opts:
+            opts.append(option._short_opts[0])
+        if option._long_opts:
+            opts.append(option._long_opts[0])
+        if len(opts) > 1:
+            opts.insert(1, ', ')

        if option.takes_value(): opts.append(' %s' % option.metavar)
@@ -189,6 +127,11 @@ def parseOpts():
            dest='ratelimit', metavar='LIMIT', help='download rate limit (e.g. 50k or 44.6m)')
    general.add_option('-R', '--retries',
            dest='retries', metavar='RETRIES', help='number of retries (default is %default)', default=10)
+    general.add_option('--buffer-size',
+            dest='buffersize', metavar='SIZE', help='size of download buffer (e.g. 1024 or 16k) (default is %default)', default="1024")
+    general.add_option('--no-resize-buffer',
+            action='store_true', dest='noresizebuffer',
+            help='do not automatically adjust the buffer size. By default, the buffer size is automatically resized from an initial value of SIZE.', default=False)
    general.add_option('--dump-user-agent',
            action='store_true', dest='dump_user_agent',
            help='display the current browser identification', default=False)

@@ -197,6 +140,7 @@ def parseOpts():
    general.add_option('--list-extractors',
            action='store_true', dest='list_extractors',
            help='List all supported extractors and the URLs they would handle', default=False)
+    general.add_option('--test', action='store_true', dest='test', default=False, help=optparse.SUPPRESS_HELP)

    selection.add_option('--playlist-start',
            dest='playliststart', metavar='NUMBER', help='playlist video to start at (default is %default)', default=1)
@@ -268,12 +212,15 @@ def parseOpts():
    filesystem.add_option('--id',
            action='store_true', dest='useid', help='use video ID in file name', default=False)
    filesystem.add_option('-l', '--literal',
-            action='store_true', dest='useliteral', help='use literal title in file name', default=False)
+            action='store_true', dest='usetitle', help='[deprecated] alias of --title', default=False)
    filesystem.add_option('-A', '--auto-number',
            action='store_true', dest='autonumber',
            help='number downloaded files starting from 00000', default=False)
    filesystem.add_option('-o', '--output',
-            dest='outtmpl', metavar='TEMPLATE', help='output filename template. Use %(stitle)s to get the title, %(uploader)s for the uploader name, %(autonumber)s to get an automatically incremented number, %(ext)s for the filename extension, %(upload_date)s for the upload date (YYYYMMDD), %(extractor)s for the provider (youtube, metacafe, etc), %(id)s for the video id and %% for a literal percent. Use - to output to stdout.')
+            dest='outtmpl', metavar='TEMPLATE', help='output filename template. Use %(title)s to get the title, %(uploader)s for the uploader name, %(uploader_id)s for the uploader nickname if different, %(autonumber)s to get an automatically incremented number, %(ext)s for the filename extension, %(upload_date)s for the upload date (YYYYMMDD), %(extractor)s for the provider (youtube, metacafe, etc), %(id)s for the video id and %% for a literal percent. Use - to output to stdout. Can also be used to download to a different directory, for example with -o \'/my/downloads/%(uploader)s/%(title)s-%(id)s.%(ext)s\' .')
+    filesystem.add_option('--restrict-filenames',
+            action='store_true', dest='restrictfilenames',
+            help='Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames', default=False)
    filesystem.add_option('-a', '--batch-file',
            dest='batchfile', metavar='FILE', help='file containing URLs to download (\'-\' for stdin)')
    filesystem.add_option('-w', '--no-overwrites',

@@ -306,6 +253,8 @@ def parseOpts():
            help='ffmpeg/avconv audio quality specification, insert a value between 0 (better) and 9 (worse) for VBR or a specific bitrate like 128K (default 5)')
    postproc.add_option('-k', '--keep-video', action='store_true', dest='keepvideo', default=False,
            help='keeps the video file on disk after the post-processing; the video is erased by default')
+    postproc.add_option('--no-post-overwrites', action='store_true', dest='nopostoverwrites', default=False,
+            help='do not overwrite post-processed files; the post-processed files are overwritten by default')

    parser.add_option_group(general)
@@ -326,59 +275,18 @@ def parseOpts():
    return parser, opts, args

-def gen_extractors():
-    """ Return a list of an instance of every supported extractor.
-    The order does matter; the first extractor matched is the one handling the URL.
-    """
-    return [
-        YoutubePlaylistIE(),
-        YoutubeChannelIE(),
-        YoutubeUserIE(),
-        YoutubeSearchIE(),
-        YoutubeIE(),
-        MetacafeIE(),
-        DailymotionIE(),
-        GoogleIE(),
-        GoogleSearchIE(),
-        PhotobucketIE(),
-        YahooIE(),
-        YahooSearchIE(),
-        DepositFilesIE(),
-        FacebookIE(),
-        BlipTVUserIE(),
-        BlipTVIE(),
-        VimeoIE(),
-        MyVideoIE(),
-        ComedyCentralIE(),
-        EscapistIE(),
-        CollegeHumorIE(),
-        XVideosIE(),
-        SoundcloudIE(),
-        InfoQIE(),
-        MixcloudIE(),
-        StanfordOpenClassroomIE(),
-        MTVIE(),
-        YoukuIE(),
-        XNXXIE(),
-        GooglePlusIE(),
-        PornotubeIE(),
-        YouPornIE(),
-        YouJizzIE(),
-        GenericIE()
-    ]
-
def _real_main():
    parser, opts, args = parseOpts()

    # Open appropriate CookieJar
    if opts.cookiefile is None:
-        jar = cookielib.CookieJar()
+        jar = compat_cookiejar.CookieJar()
    else:
        try:
-            jar = cookielib.MozillaCookieJar(opts.cookiefile)
+            jar = compat_cookiejar.MozillaCookieJar(opts.cookiefile)
            if os.path.isfile(opts.cookiefile) and os.access(opts.cookiefile, os.R_OK):
                jar.load()
-        except (IOError, OSError), err:
+        except (IOError, OSError) as err:
            sys.exit(u'ERROR: unable to open cookie file')

    # Set user agent
    if opts.user_agent is not None:
@@ -386,7 +294,7 @@ def _real_main():
    # Dump user agent
    if opts.dump_user_agent:
-        print std_headers['User-Agent']
+        print(std_headers['User-Agent'])
        sys.exit(0)

    # Batch file verification

@@ -403,22 +311,22 @@ def _real_main():
        except IOError:
            sys.exit(u'ERROR: batch file could not be read')
    all_urls = batchurls + args
-    all_urls = map(lambda url: url.strip(), all_urls)
+    all_urls = [url.strip() for url in all_urls]

    # General configuration
-    cookie_processor = urllib2.HTTPCookieProcessor(jar)
-    proxy_handler = urllib2.ProxyHandler()
-    opener = urllib2.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
-    urllib2.install_opener(opener)
+    cookie_processor = compat_urllib_request.HTTPCookieProcessor(jar)
+    proxy_handler = compat_urllib_request.ProxyHandler()
+    opener = compat_urllib_request.build_opener(proxy_handler, cookie_processor, YoutubeDLHandler())
+    compat_urllib_request.install_opener(opener)
    socket.setdefaulttimeout(300) # 5 minutes should be enough (famous last words)

    extractors = gen_extractors()

    if opts.list_extractors:
        for ie in extractors:
-            print(ie.IE_NAME)
-            matchedUrls = filter(lambda url: ie.suitable(url), all_urls)
-            all_urls = filter(lambda url: url not in matchedUrls, all_urls)
+            print(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie._WORKING else ''))
+            matchedUrls = [url for url in all_urls if ie.suitable(url)]
+            all_urls = [url for url in all_urls if url not in matchedUrls]
            for mu in matchedUrls:
                print(u'  ' + mu)
        sys.exit(0)
@@ -428,14 +336,10 @@ def _real_main():
        parser.error(u'using .netrc conflicts with giving username/password')
    if opts.password is not None and opts.username is None:
        parser.error(u'account username missing')
-    if opts.outtmpl is not None and (opts.useliteral or opts.usetitle or opts.autonumber or opts.useid):
-        parser.error(u'using output template conflicts with using title, literal title, video ID or auto number')
-    if opts.usetitle and opts.useliteral:
-        parser.error(u'using title conflicts with using literal title')
+    if opts.outtmpl is not None and (opts.usetitle or opts.autonumber or opts.useid):
+        parser.error(u'using output template conflicts with using title, video ID or auto number')
    if opts.usetitle and opts.useid:
        parser.error(u'using title conflicts with using video ID')
-    if opts.useliteral and opts.useid:
-        parser.error(u'using literal title conflicts with using video ID')
    if opts.username is not None and opts.password is None:
        opts.password = getpass.getpass(u'Type account password and press return:')
    if opts.ratelimit is not None:

@@ -445,20 +349,25 @@ def _real_main():
        opts.ratelimit = numeric_limit
    if opts.retries is not None:
        try:
-            opts.retries = long(opts.retries)
-        except (TypeError, ValueError), err:
+            opts.retries = int(opts.retries)
+        except (TypeError, ValueError) as err:
            parser.error(u'invalid retry count specified')
+    if opts.buffersize is not None:
+        numeric_buffersize = FileDownloader.parse_bytes(opts.buffersize)
+        if numeric_buffersize is None:
+            parser.error(u'invalid buffer size specified')
+        opts.buffersize = numeric_buffersize
    try:
        opts.playliststart = int(opts.playliststart)
        if opts.playliststart <= 0:
            raise ValueError(u'Playlist start must be positive')
-    except (TypeError, ValueError), err:
+    except (TypeError, ValueError) as err:
        parser.error(u'invalid playlist start number specified')
    try:
        opts.playlistend = int(opts.playlistend)
        if opts.playlistend != -1 and (opts.playlistend <= 0 or opts.playlistend < opts.playliststart):
            raise ValueError(u'Playlist end must be greater than playlist start')
-    except (TypeError, ValueError), err:
+    except (TypeError, ValueError) as err:
        parser.error(u'invalid playlist end number specified')
    if opts.extractaudio:
        if opts.audioformat not in ['best', 'aac', 'mp3', 'vorbis', 'm4a', 'wav']:
@@ -468,6 +377,18 @@ def _real_main():
        if not opts.audioquality.isdigit():
            parser.error(u'invalid audio quality specified')

+    if sys.version_info < (3,):
+        # In Python 2, sys.argv is a bytestring (also note http://bugs.python.org/issue2128 for Windows systems)
+        if opts.outtmpl is not None:
+            opts.outtmpl = opts.outtmpl.decode(preferredencoding())
+    outtmpl =((opts.outtmpl is not None and opts.outtmpl)
+            or (opts.format == '-1' and opts.usetitle and u'%(title)s-%(id)s-%(format)s.%(ext)s')
+            or (opts.format == '-1' and u'%(id)s-%(format)s.%(ext)s')
+            or (opts.usetitle and opts.autonumber and u'%(autonumber)s-%(title)s-%(id)s.%(ext)s')
+            or (opts.usetitle and u'%(title)s-%(id)s.%(ext)s')
+            or (opts.useid and u'%(id)s.%(ext)s')
+            or (opts.autonumber and u'%(autonumber)s-%(id)s.%(ext)s')
+            or u'%(id)s.%(ext)s')
+
    # File downloader
    fd = FileDownloader({
        'usenetrc': opts.usenetrc,

@@ -485,21 +406,14 @@ def _real_main():
        'format': opts.format,
        'format_limit': opts.format_limit,
        'listformats': opts.listformats,
-        'outtmpl': ((opts.outtmpl is not None and opts.outtmpl.decode(preferredencoding()))
-            or (opts.format == '-1' and opts.usetitle and u'%(stitle)s-%(id)s-%(format)s.%(ext)s')
-            or (opts.format == '-1' and opts.useliteral and u'%(title)s-%(id)s-%(format)s.%(ext)s')
-            or (opts.format == '-1' and u'%(id)s-%(format)s.%(ext)s')
-            or (opts.usetitle and opts.autonumber and u'%(autonumber)s-%(stitle)s-%(id)s.%(ext)s')
-            or (opts.useliteral and opts.autonumber and u'%(autonumber)s-%(title)s-%(id)s.%(ext)s')
-            or (opts.usetitle and u'%(stitle)s-%(id)s.%(ext)s')
-            or (opts.useliteral and u'%(title)s-%(id)s.%(ext)s')
-            or (opts.useid and u'%(id)s.%(ext)s')
-            or (opts.autonumber and u'%(autonumber)s-%(id)s.%(ext)s')
-            or u'%(id)s.%(ext)s'),
+        'outtmpl': outtmpl,
+        'restrictfilenames': opts.restrictfilenames,
        'ignoreerrors': opts.ignoreerrors,
        'ratelimit': opts.ratelimit,
        'nooverwrites': opts.nooverwrites,
        'retries': opts.retries,
+        'buffersize': opts.buffersize,
+        'noresizebuffer': opts.noresizebuffer,
        'continuedl': opts.continue_dl,
        'noprogress': opts.noprogress,
        'playliststart': opts.playliststart,
@@ -517,9 +431,21 @@ def _real_main():
        'max_downloads': opts.max_downloads,
        'prefer_free_formats': opts.prefer_free_formats,
        'verbose': opts.verbose,
+        'test': opts.test,
        })

    if opts.verbose:
+        fd.to_screen(u'[debug] youtube-dl version ' + __version__)
+        try:
+            sp = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+                                  cwd=os.path.dirname(os.path.abspath(__file__)))
+            out, err = sp.communicate()
+            out = out.decode().strip()
+            if re.match('[0-9a-f]+', out):
+                fd.to_screen(u'[debug] Git HEAD: ' + out)
+        except:
+            pass
+        fd.to_screen(u'[debug] Python version %s - %s' %(platform.python_version(), platform.platform()))
        fd.to_screen(u'[debug] Proxy map: ' + str(proxy_handler.proxies))

    for extractor in extractors:

@@ -527,11 +453,11 @@ def _real_main():

    # PostProcessors
    if opts.extractaudio:
-        fd.add_post_processor(FFmpegExtractAudioPP(preferredcodec=opts.audioformat, preferredquality=opts.audioquality, keepvideo=opts.keepvideo))
+        fd.add_post_processor(FFmpegExtractAudioPP(preferredcodec=opts.audioformat, preferredquality=opts.audioquality, keepvideo=opts.keepvideo, nopostoverwrites=opts.nopostoverwrites))

    # Update version
    if opts.update_self:
-        updateSelf(fd, sys.argv[0])
+        update_self(fd.to_screen, opts.verbose, sys.argv[0])

    # Maybe do nothing
    if len(all_urls) < 1:

@@ -550,7 +476,7 @@ def _real_main():
    if opts.cookiefile is not None:
        try:
            jar.save()
-        except (IOError, OSError), err:
+        except (IOError, OSError) as err:
            sys.exit(u'ERROR: unable to save cookie jar')

    sys.exit(retcode)
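
As a side note on the template mechanics (illustrative, not part of the diff): the output template chosen above is expanded with ordinary %-style dict formatting against the video's info fields, so for example:

    # Illustrative expansion of an output template against made-up info fields:
    outtmpl = u'%(uploader)s/%(title)s-%(id)s.%(ext)s'
    info = {'uploader': u'someuser', 'title': u'Some Clip', 'id': u'abc123', 'ext': u'mp4'}
    print(outtmpl % info)  # -> someuser/Some Clip-abc123.mp4

This is also why a template containing a directory separator, as in the new -o help text, downloads into a different directory.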

youtube_dl/__main__.py

@@ -1,7 +1,17 @@
#!/usr/bin/env python
+# -*- coding: utf-8 -*-

-import __init__
+# Execute with
+# $ python youtube_dl/__main__.py (2.6+)
+# $ python -m youtube_dl (2.7+)
+
+import sys
+
+if __package__ is None and not hasattr(sys, "frozen"):
+    # direct call of __main__.py
+    import os.path
+    sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+import youtube_dl

if __name__ == '__main__':
-    __init__.main()
+    youtube_dl.main()

youtube_dl/update.py (new file)

@@ -0,0 +1,160 @@
import json
import traceback
import hashlib
from zipimport import zipimporter

from .utils import *
from .version import __version__

def rsa_verify(message, signature, key):
    from struct import pack
    from hashlib import sha256
    from sys import version_info
    def b(x):
        if version_info[0] == 2: return x
        else: return x.encode('latin1')
    assert(type(message) == type(b('')))
    block_size = 0
    n = key[0]
    while n:
        block_size += 1
        n >>= 8
    signature = pow(int(signature, 16), key[1], key[0])
    raw_bytes = []
    while signature:
        raw_bytes.insert(0, pack("B", signature & 0xFF))
        signature >>= 8
    signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes)
    if signature[0:2] != b('\x00\x01'): return False
    signature = signature[2:]
    if not b('\x00') in signature: return False
    signature = signature[signature.index(b('\x00'))+1:]
    if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False
    signature = signature[19:]
    if signature != sha256(message).digest(): return False
    return True
def update_self(to_screen, verbose, filename):
    """Update the program file with the latest version from the repository"""

    UPDATE_URL = "http://rg3.github.com/youtube-dl/update/"
    VERSION_URL = UPDATE_URL + 'LATEST_VERSION'
    JSON_URL = UPDATE_URL + 'versions.json'
    UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)

    if not isinstance(globals().get('__loader__'), zipimporter) and not hasattr(sys, "frozen"):
        to_screen(u'It looks like you installed youtube-dl with pip, setup.py or a tarball. Please use that to update.')
        return

    # Check if there is a new version
    try:
        newversion = compat_urllib_request.urlopen(VERSION_URL).read().decode('utf-8').strip()
    except:
        if verbose: to_screen(compat_str(traceback.format_exc()))
        to_screen(u'ERROR: can\'t find the current version. Please try again later.')
        return
    if newversion == __version__:
        to_screen(u'youtube-dl is up-to-date (' + __version__ + ')')
        return

    # Download and check versions info
    try:
        versions_info = compat_urllib_request.urlopen(JSON_URL).read().decode('utf-8')
        versions_info = json.loads(versions_info)
    except:
        if verbose: to_screen(compat_str(traceback.format_exc()))
        to_screen(u'ERROR: can\'t obtain versions info. Please try again later.')
        return
    if not 'signature' in versions_info:
        to_screen(u'ERROR: the versions file is not signed or corrupted. Aborting.')
        return
    signature = versions_info['signature']
    del versions_info['signature']
    if not rsa_verify(json.dumps(versions_info, sort_keys=True).encode('utf-8'), signature, UPDATES_RSA_KEY):
        to_screen(u'ERROR: the versions file signature is invalid. Aborting.')
        return

    to_screen(u'Updating to version ' + versions_info['latest'] + '...')
    version = versions_info['versions'][versions_info['latest']]
    if version.get('notes'):
        to_screen(u'PLEASE NOTE:')
        for note in version['notes']:
            to_screen(note)

    if not os.access(filename, os.W_OK):
        to_screen(u'ERROR: no write permissions on %s' % filename)
        return

    # Py2EXE
    if hasattr(sys, "frozen"):
        exe = os.path.abspath(filename)
        directory = os.path.dirname(exe)
        if not os.access(directory, os.W_OK):
            to_screen(u'ERROR: no write permissions on %s' % directory)
            return

        try:
            urlh = compat_urllib_request.urlopen(version['exe'][0])
            newcontent = urlh.read()
            urlh.close()
        except (IOError, OSError) as err:
            if verbose: to_screen(compat_str(traceback.format_exc()))
            to_screen(u'ERROR: unable to download latest version')
            return

        newcontent_hash = hashlib.sha256(newcontent).hexdigest()
        if newcontent_hash != version['exe'][1]:
            to_screen(u'ERROR: the downloaded file hash does not match. Aborting.')
            return

        try:
            with open(exe + '.new', 'wb') as outf:
                outf.write(newcontent)
        except (IOError, OSError) as err:
            if verbose: to_screen(compat_str(traceback.format_exc()))
            to_screen(u'ERROR: unable to write the new version')
            return

        try:
            bat = os.path.join(directory, 'youtube-dl-updater.bat')
            b = open(bat, 'w')
            b.write("""
echo Updating youtube-dl...
ping 127.0.0.1 -n 5 -w 1000 > NUL
move /Y "%s.new" "%s"
del "%s"
            \n""" %(exe, exe, bat))
            b.close()
            os.startfile(bat)
        except (IOError, OSError) as err:
            if verbose: to_screen(compat_str(traceback.format_exc()))
            to_screen(u'ERROR: unable to overwrite current version')
            return

    # Zip unix package
    elif isinstance(globals().get('__loader__'), zipimporter):
        try:
            urlh = compat_urllib_request.urlopen(version['bin'][0])
            newcontent = urlh.read()
            urlh.close()
        except (IOError, OSError) as err:
            if verbose: to_screen(compat_str(traceback.format_exc()))
            to_screen(u'ERROR: unable to download latest version')
            return

        newcontent_hash = hashlib.sha256(newcontent).hexdigest()
        if newcontent_hash != version['bin'][1]:
            to_screen(u'ERROR: the downloaded file hash does not match. Aborting.')
            return

        try:
            with open(filename, 'wb') as outf:
                outf.write(newcontent)
        except (IOError, OSError) as err:
            if verbose: to_screen(compat_str(traceback.format_exc()))
            to_screen(u'ERROR: unable to overwrite current version')
            return

    to_screen(u'Updated youtube-dl. Restart youtube-dl to use the new version.')
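
Reading the accesses above back out, the signed versions.json apparently has roughly this shape (a sketch with made-up values, not a real manifest):

    # Illustrative structure as consumed by update_self:
    versions_info = {
        'latest': '2013.01.02',
        'versions': {
            '2013.01.02': {
                'bin': ['http://example.com/youtube-dl', '<sha256 hex of the file>'],
                'exe': ['http://example.com/youtube-dl.exe', '<sha256 hex of the file>'],
                'notes': ['Optional release notes printed before updating'],
            },
        },
        'signature': '<hex-encoded RSA signature over the canonical JSON>',
    }

The signature is computed over json.dumps(versions_info, sort_keys=True) with the 'signature' key removed, which is exactly what the verification step reconstructs.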

youtube_dl/utils.py

@@ -2,21 +2,151 @@
# -*- coding: utf-8 -*-

import gzip
-import htmlentitydefs
-import HTMLParser
+import io
+import json
import locale
import os
import re
import sys
+import traceback
import zlib
-import urllib2
import email.utils
import json

try:
-    import cStringIO as StringIO
-except ImportError:
-    import StringIO
+    import urllib.request as compat_urllib_request
+except ImportError: # Python 2
+    import urllib2 as compat_urllib_request
+
+try:
+    import urllib.error as compat_urllib_error
+except ImportError: # Python 2
+    import urllib2 as compat_urllib_error
+
+try:
+    import urllib.parse as compat_urllib_parse
+except ImportError: # Python 2
+    import urllib as compat_urllib_parse
+
+try:
+    from urllib.parse import urlparse as compat_urllib_parse_urlparse
+except ImportError: # Python 2
+    from urlparse import urlparse as compat_urllib_parse_urlparse
+
+try:
+    import http.cookiejar as compat_cookiejar
+except ImportError: # Python 2
+    import cookielib as compat_cookiejar
+
+try:
+    import html.entities as compat_html_entities
+except ImportError: # Python 2
+    import htmlentitydefs as compat_html_entities
+
+try:
+    import html.parser as compat_html_parser
+except ImportError: # Python 2
+    import HTMLParser as compat_html_parser
+
+try:
+    import http.client as compat_http_client
+except ImportError: # Python 2
+    import httplib as compat_http_client
+
+try:
+    from subprocess import DEVNULL
+    compat_subprocess_get_DEVNULL = lambda: DEVNULL
+except ImportError:
+    compat_subprocess_get_DEVNULL = lambda: open(os.path.devnull, 'w')
+try:
+    from urllib.parse import parse_qs as compat_parse_qs
+except ImportError: # Python 2
+    # HACK: The following is the correct parse_qs implementation from cpython 3's stdlib.
+    # Python 2's version is apparently totally broken
+    def _unquote(string, encoding='utf-8', errors='replace'):
+        if string == '':
+            return string
+        res = string.split('%')
+        if len(res) == 1:
+            return string
+        if encoding is None:
+            encoding = 'utf-8'
+        if errors is None:
+            errors = 'replace'
+        # pct_sequence: contiguous sequence of percent-encoded bytes, decoded
+        pct_sequence = b''
+        string = res[0]
+        for item in res[1:]:
+            try:
+                if not item:
+                    raise ValueError
+                pct_sequence += item[:2].decode('hex')
+                rest = item[2:]
+                if not rest:
+                    # This segment was just a single percent-encoded character.
+                    # May be part of a sequence of code units, so delay decoding.
+                    # (Stored in pct_sequence).
+                    continue
+            except ValueError:
+                rest = '%' + item
+            # Encountered non-percent-encoded characters. Flush the current
+            # pct_sequence.
+            string += pct_sequence.decode(encoding, errors) + rest
+            pct_sequence = b''
+        if pct_sequence:
+            # Flush the final pct_sequence
+            string += pct_sequence.decode(encoding, errors)
+        return string
+
+    def _parse_qsl(qs, keep_blank_values=False, strict_parsing=False,
+                encoding='utf-8', errors='replace'):
+        qs, _coerce_result = qs, unicode
+        pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
+        r = []
+        for name_value in pairs:
+            if not name_value and not strict_parsing:
+                continue
+            nv = name_value.split('=', 1)
+            if len(nv) != 2:
+                if strict_parsing:
+                    raise ValueError("bad query field: %r" % (name_value,))
+                # Handle case of a control-name with no equal sign
+                if keep_blank_values:
+                    nv.append('')
+                else:
+                    continue
+            if len(nv[1]) or keep_blank_values:
+                name = nv[0].replace('+', ' ')
+                name = _unquote(name, encoding=encoding, errors=errors)
+                name = _coerce_result(name)
+                value = nv[1].replace('+', ' ')
+                value = _unquote(value, encoding=encoding, errors=errors)
+                value = _coerce_result(value)
+                r.append((name, value))
+        return r
+
+    def compat_parse_qs(qs, keep_blank_values=False, strict_parsing=False,
+                encoding='utf-8', errors='replace'):
+        parsed_result = {}
+        pairs = _parse_qsl(qs, keep_blank_values, strict_parsing,
+                        encoding=encoding, errors=errors)
+        for name, value in pairs:
+            if name in parsed_result:
+                parsed_result[name].append(value)
+            else:
+                parsed_result[name] = [value]
+        return parsed_result
+try:
+    compat_str = unicode # Python 2
+except NameError:
+    compat_str = str
+
+try:
+    compat_chr = unichr # Python 2
+except NameError:
+    compat_chr = chr

std_headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0',
@@ -32,19 +162,35 @@ def preferredencoding():
    Returns the best encoding scheme for the system, based on
    locale.getpreferredencoding() and some further tweaks.
    """
-    def yield_preferredencoding():
-        try:
-            pref = locale.getpreferredencoding()
-            u'TEST'.encode(pref)
-        except:
-            pref = 'UTF-8'
-        while True:
-            yield pref
-    return yield_preferredencoding().next()
+    try:
+        pref = locale.getpreferredencoding()
+        u'TEST'.encode(pref)
+    except:
+        pref = 'UTF-8'
+
+    return pref
+
+if sys.version_info < (3,0):
+    def compat_print(s):
+        print(s.encode(preferredencoding(), 'xmlcharrefreplace'))
+else:
+    def compat_print(s):
+        assert type(s) == type(u'')
+        print(s)
+
+# In Python 2.x, json.dump expects a bytestream.
+# In Python 3.x, it writes to a character stream
+if sys.version_info < (3,0):
+    def write_json_file(obj, fn):
+        with open(fn, 'wb') as f:
+            json.dump(obj, f)
+else:
+    def write_json_file(obj, fn):
+        with open(fn, 'w', encoding='utf-8') as f:
+            json.dump(obj, f)
def htmlentity_transform(matchobj):
-    """Transforms an HTML entity to a Unicode character.
+    """Transforms an HTML entity to a character.

    This function receives a match object and is intended to be used with
    the re.sub() function.

@@ -52,11 +198,10 @@ def htmlentity_transform(matchobj):
    entity = matchobj.group(1)

    # Known non-numeric HTML entity
-    if entity in htmlentitydefs.name2codepoint:
-        return unichr(htmlentitydefs.name2codepoint[entity])
+    if entity in compat_html_entities.name2codepoint:
+        return compat_chr(compat_html_entities.name2codepoint[entity])

-    # Unicode character
-    mobj = re.match(ur'(?u)#(x?\d+)', entity)
+    mobj = re.match(u'(?u)#(x?\\d+)', entity)
    if mobj is not None:
        numstr = mobj.group(1)
        if numstr.startswith(u'x'):
-HTMLParser.locatestarttagend = re.compile(r"""<[a-zA-Z][-.a-zA-Z0-9:_]*(?:\s+(?:(?<=['"\s])[^\s/>][^\s/=>]*(?:\s*=+\s*(?:'[^']*'|"[^"]*"|(?!['"])[^>\s]*))?\s*)*)?\s*""", re.VERBOSE) # backport bugfix
+compat_html_parser.locatestarttagend = re.compile(r"""<[a-zA-Z][-.a-zA-Z0-9:_]*(?:\s+(?:(?<=['"\s])[^\s/>][^\s/=>]*(?:\s*=+\s*(?:'[^']*'|"[^"]*"|(?!['"])[^>\s]*))?\s*)*)?\s*""", re.VERBOSE) # backport bugfix

-class IDParser(HTMLParser.HTMLParser):
-    """Modified HTMLParser that isolates a tag with the specified id"""
-    def __init__(self, id):
-        self.id = id
+class AttrParser(compat_html_parser.HTMLParser):
+    """Modified HTMLParser that isolates a tag with the specified attribute"""
+    def __init__(self, attribute, value):
+        self.attribute = attribute
+        self.value = value
        self.result = None
        self.started = False
        self.depth = {}
        self.html = None
        self.watch_startpos = False
        self.error_count = 0
-        HTMLParser.HTMLParser.__init__(self)
+        compat_html_parser.HTMLParser.__init__(self)

    def error(self, message):
-        #print >> sys.stderr, self.getpos()
        if self.error_count > 10 or self.started:
-            raise HTMLParser.HTMLParseError(message, self.getpos())
+            raise compat_html_parser.HTMLParseError(message, self.getpos())
        self.rawdata = '\n'.join(self.html.split('\n')[self.getpos()[0]:]) # skip one line
        self.error_count += 1
        self.goahead(1)

@@ -99,7 +244,7 @@ class IDParser(HTMLParser.HTMLParser):
        attrs = dict(attrs)
        if self.started:
            self.find_startpos(None)
-        if 'id' in attrs and attrs['id'] == self.id:
+        if self.attribute in attrs and attrs[self.attribute] == self.value:
            self.result = [tag]
            self.started = True
            self.watch_startpos = True

@@ -124,8 +269,10 @@ class IDParser(HTMLParser.HTMLParser):
    handle_decl = handle_pi = unknown_decl = find_startpos

    def get_result(self):
-        if self.result == None: return None
-        if len(self.result) != 3: return None
+        if self.result is None:
+            return None
+        if len(self.result) != 3:
+            return None
        lines = self.html.split('\n')
        lines = lines[self.result[1][0]-1:self.result[2][0]]
        lines[0] = lines[0][self.result[1][1]:]
@@ -135,11 +282,15 @@ class IDParser(HTMLParser.HTMLParser):
        return '\n'.join(lines).strip()

def get_element_by_id(id, html):
-    """Return the content of the tag with the specified id in the passed HTML document"""
-    parser = IDParser(id)
+    """Return the content of the tag with the specified ID in the passed HTML document"""
+    return get_element_by_attribute("id", id, html)
+
+def get_element_by_attribute(attribute, value, html):
+    """Return the content of the tag with the specified attribute in the passed HTML document"""
+    parser = AttrParser(attribute, value)
    try:
        parser.loads(html)
-    except HTMLParser.HTMLParseError:
+    except compat_html_parser.HTMLParseError:
        pass
    return parser.get_result()
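
Rough usage of the generalized lookup (illustrative; exact output depends on the parser details above):

    html = u'<div id="x" class="box">hello <b>world</b></div>'
    print(get_element_by_id(u'x', html))                     # inner HTML of the div
    print(get_element_by_attribute(u'class', u'box', html))  # same element, found by class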
@@ -148,7 +299,8 @@ def clean_html(html):
    """Clean an HTML snippet into a readable string"""
    # Newline vs <br />
    html = html.replace('\n', ' ')
-    html = re.sub('\s*<\s*br\s*/?\s*>\s*', '\n', html)
+    html = re.sub(r'\s*<\s*br\s*/?\s*>\s*', '\n', html)
+    html = re.sub(r'<\s*/\s*p\s*>\s*<\s*p[^>]*>', '\n', html)
    # Strip html tags
    html = re.sub('<.*?>', '', html)
    # Replace html entities
@@ -174,9 +326,9 @@ def sanitize_open(filename, open_mode):
            return (sys.stdout, filename)
        stream = open(encodeFilename(filename), open_mode)
        return (stream, filename)
-    except (IOError, OSError), err:
+    except (IOError, OSError) as err:
        # In case of error, try to remove win32 forbidden chars
-        filename = re.sub(ur'[/<>:"\|\?\*]', u'#', filename)
+        filename = re.sub(u'[/<>:"\\|\\\\?\\*]', u'#', filename)

        # An exception here should be caught in the caller
        stream = open(encodeFilename(filename), open_mode)
@@ -191,23 +343,37 @@ def timeconvert(timestr):
        timestamp = email.utils.mktime_tz(timetuple)
    return timestamp

-def sanitize_filename(s):
-    """Sanitizes a string so it could be used as part of a filename."""
+def sanitize_filename(s, restricted=False, is_id=False):
+    """Sanitizes a string so it could be used as part of a filename.
+    If restricted is set, use a stricter subset of allowed characters.
+    Set is_id if this is not an arbitrary string, but an ID that should be kept if possible
+    """
    def replace_insane(char):
        if char == '?' or ord(char) < 32 or ord(char) == 127:
            return ''
        elif char == '"':
-            return '\''
+            return '' if restricted else '\''
        elif char == ':':
-            return ' -'
+            return '_-' if restricted else ' -'
        elif char in '\\/|*<>':
-            return '-'
+            return '_'
+        if restricted and (char in '!&\'()[]{}$;`^,#' or char.isspace()):
+            return '_'
+        if restricted and ord(char) > 127:
+            return '_'
        return char

    result = u''.join(map(replace_insane, s))
-    while '--' in result:
-        result = result.replace('--', '-')
-    return result.strip('-')
+    if not is_id:
+        while '__' in result:
+            result = result.replace('__', '_')
+        result = result.strip('_')
+        # Common case of "Foreign band name - English song title"
+        if restricted and result.startswith('-_'):
+            result = result[2:]
+        if not result:
+            result = '_'
+    return result
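
To see the two modes side by side (illustrative; traced through replace_insane above):

    print(sanitize_filename(u'AT&T: so/so'))                   # -> u"AT&T - so_so"
    print(sanitize_filename(u'AT&T: so/so', restricted=True))  # -> u'AT_T_-_so_so'

The restricted mode backs --restrict-filenames: punctuation and whitespace collapse to underscores, and non-ASCII characters are replaced outright.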
def orderedSet(iterable):
    """ Remove all duplicates from the input iterable """

@@ -219,20 +385,24 @@ def orderedSet(iterable):

def unescapeHTML(s):
    """
-    @param s a string (of type unicode)
+    @param s a string
    """
    assert type(s) == type(u'')

-    result = re.sub(ur'(?u)&(.+?);', htmlentity_transform, s)
+    result = re.sub(u'(?u)&(.+?);', htmlentity_transform, s)
    return result

def encodeFilename(s):
    """
-    @param s The name of the file (of type unicode)
+    @param s The name of the file
    """
    assert type(s) == type(u'')

+    # Python 3 has a Unicode API
+    if sys.version_info >= (3, 0):
+        return s
+
    if sys.platform == 'win32' and sys.getwindowsversion()[0] >= 5:
        # Pass u'' directly to use Unicode APIs on Windows 2000 and up
        # (Detecting Windows NT 4 is tricky because 'major >= 4' would
@@ -241,6 +411,20 @@ def encodeFilename(s):
    else:
        return s.encode(sys.getfilesystemencoding(), 'ignore')

+class ExtractorError(Exception):
+    """Error during info extraction."""
+    def __init__(self, msg, tb=None):
+        """ tb, if given, is the original traceback (so that it can be printed out). """
+        super(ExtractorError, self).__init__(msg)
+        self.traceback = tb
+
+    def format_traceback(self):
+        if self.traceback is None:
+            return None
+        return u''.join(traceback.format_tb(self.traceback))
+
class DownloadError(Exception):
    """Download Error exception.
@@ -297,15 +481,7 @@ class ContentTooShortError(Exception):
        self.downloaded = downloaded
        self.expected = expected

-
-class Trouble(Exception):
-    """Trouble helper exception
-
-    This is an exception to be handled with
-    FileDownloader.trouble
-    """
-
-class YoutubeDLHandler(urllib2.HTTPHandler):
+class YoutubeDLHandler(compat_urllib_request.HTTPHandler):
    """Handler for HTTP requests and responses.

    This class, when installed with an OpenerDirector, automatically adds
@@ -332,9 +508,9 @@ class YoutubeDLHandler(urllib2.HTTPHandler):
    @staticmethod
    def addinfourl_wrapper(stream, headers, url, code):
-        if hasattr(urllib2.addinfourl, 'getcode'):
-            return urllib2.addinfourl(stream, headers, url, code)
-        ret = urllib2.addinfourl(stream, headers, url)
+        if hasattr(compat_urllib_request.addinfourl, 'getcode'):
+            return compat_urllib_request.addinfourl(stream, headers, url, code)
+        ret = compat_urllib_request.addinfourl(stream, headers, url)
        ret.code = code
        return ret
@@ -353,12 +529,15 @@ class YoutubeDLHandler(urllib2.HTTPHandler):
        old_resp = resp
        # gzip
        if resp.headers.get('Content-encoding', '') == 'gzip':
-            gz = gzip.GzipFile(fileobj=StringIO.StringIO(resp.read()), mode='r')
+            gz = gzip.GzipFile(fileobj=io.BytesIO(resp.read()), mode='r')
            resp = self.addinfourl_wrapper(gz, old_resp.headers, old_resp.url, old_resp.code)
            resp.msg = old_resp.msg
        # deflate
        if resp.headers.get('Content-encoding', '') == 'deflate':
-            gz = StringIO.StringIO(self.deflate(resp.read()))
+            gz = io.BytesIO(self.deflate(resp.read()))
            resp = self.addinfourl_wrapper(gz, old_resp.headers, old_resp.url, old_resp.code)
            resp.msg = old_resp.msg
        return resp
+
+    https_request = http_request
+    https_response = http_response

youtube_dl/version.py (new file)

@@ -0,0 +1,2 @@
__version__ = '2013.01.02'