Bug 1226413 - Allow task images to be built upon push r=wcosta

Gregory Arndt 2015-11-16 12:01:15 -06:00
parent 2c9799531b
commit 018af18395
18 changed files with 593 additions and 94 deletions

View File

@ -5,8 +5,90 @@ hacking on gecko.
## Organization
Each folder describes a single docker image. We have two types of images that can be defined:
1. [Task Images (build-on-push)](#task-images-build-on-push)
2. [Docker Images (prebuilt)](#docker-registry-images-prebuilt)
These images depend on one another, as described in the [`FROM`](https://docs.docker.com/v1.8/reference/builder/#from)
line at the top of the Dockerfile in each folder.
An image is either intended for pushing to a docker registry, or meant for local testing
or for being built as an artifact when pushed to vcs.
### Task Images (build-on-push)
Images can be uploaded as a task artifact, [indexed](#task-image-index-namespace) under
a given namespace, and used in other tasks by referencing the task ID.
Importantly, these images do not require building and pushing to a docker registry; they are
built per push (if necessary) and uploaded as task artifacts.
The decision task that is run per push will [determine](#context-directory-hashing)
if the image needs to be built based on the hash of the context directory and if the image
exists under the namespace for a given branch.
As an additional convenience, and as a precaution against building duplicate images per
branch, if an image has been indexed with a given context hash for mozilla-central, any task
requiring that image will use that indexed task. This ensures that multiple images are not
built from the same context. In short, once an image has been built for mozilla-central,
pushes to any branch will use that already built image.
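The lookup the decision task performs can be sketched as follows (a simplified sketch; the full
logic in `image_builder.py` additionally verifies that the indexed task's `public/image.tar`
artifact still exists):

```python
import json
import urllib2

INDEX_URL = 'https://index.taskcluster.net/v1/task/docker.images.v1.{}.{}.hash.{}'

def find_image_task_id(project, name, context_hash):
    # mozilla-central is consulted first so that every branch reuses the
    # image built from the same context hash.
    for branch in ['mozilla-central', project]:
        try:
            url = INDEX_URL.format(branch, name, context_hash)
            return json.load(urllib2.urlopen(url))['taskId']
        except urllib2.HTTPError:
            pass
    return None  # no prior image; a new image task must be created
```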
To use within an in-tree task definition, the format is:
```yaml
image:
type: 'task-image'
path: 'public/image.tar'
taskId: '{{#task_id_for_image}}builder{{/task_id_for_image}}'
```
##### Context Directory Hashing
Decision tasks will calculate the sha256 hash of the contents of the image
directory and will determine if the image already exists for a given branch and hash
or if a new image must be built and indexed.
Note: this is the contents of *only* the context directory, not the
image contents.
The decision task will:
1. Recursively collect the paths of all files within the context directory.
2. Sort the paths alphabetically to ensure the hash is consistently calculated.
3. Generate a sha256 hash of the contents of each file.
4. Combine each file hash with its path and use the result to update the hash
of the context directory.
This ensures that the hash is consistently calculated and path changes will result
in different hashes being generated.
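A minimal sketch of this procedure (mirroring `generate_context_hash` in `image_builder.py`):

```python
import hashlib
import os

def context_hash(context_dir):
    digest = hashlib.sha256()
    paths = []
    # 1. Recursively collect the paths of all files in the context directory
    for dirpath, _, filenames in os.walk(context_dir):
        for filename in filenames:
            paths.append(os.path.join(dirpath, filename))
    # 2. Sort so the hash is calculated in a consistent order
    for path in sorted(paths):
        # 3. Hash the contents of each file
        with open(path, 'rb') as f:
            file_digest = hashlib.sha256(f.read()).hexdigest()
        # 4. Fold each file hash and its path into the directory hash
        digest.update(file_digest + '\t' + path + '\n')
    return digest.hexdigest()
```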
##### Task Image Index Namespace
Images that are built on push and uploaded as an artifact of a task will be indexed under the
following namespaces.
* docker.images.v1.{project}.{image_name}.latest
* docker.images.v1.{project}.{image_name}.pushdate.{year}.{month}-{day}-{pushtime}
* docker.images.v1.{project}.{image_name}.hash.{context_hash}
Images can be browsed by pushdate and context hash, and the 'latest' namespace points at
the most recently built image. This functions similarly to the 'latest' tag
for docker images that are pushed to a registry.
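For example, the latest `image_builder` image for mozilla-central could be resolved with a
query like this (a sketch against the index service used elsewhere in this commit):

```python
import json
import urllib2

# Resolve the task that produced the most recently built image_builder image
url = 'https://index.taskcluster.net/v1/task/docker.images.v1.mozilla-central.image_builder.latest'
task = json.load(urllib2.urlopen(url))
print task['taskId']  # the image artifact lives at public/image.tar on this task
```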
### Docker Registry Images (prebuilt)
***Deprecation Warning: Prebuilt images should only be used for base images (those that other
images will inherit from) or for private images that must be stored in a private docker registry
account. Existing public images will be converted to images that are built on push, and any
newly added image should follow this pattern.***
These are images that are intended to be pushed to a docker registry and used by specifying the
folder name in task definitions. The registry and version are populated automatically by the
'docker_image' convenience method in task definitions.
Example:
image: {{#docker_image}}builder{{/docker_image}}
Each image has a version, given by its `VERSION` file. This should be bumped when any changes are made that will be deployed into taskcluster.
Then, older tasks which were designed to run on an older version of the image can still be executed in taskcluster, while new tasks can use the new version.
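For instance, given the `image_builder` folder added in this commit (whose REGISTRY file contains
`taskcluster` and whose VERSION file contains `0.1.3`), the `docker_image` helper resolves as
follows (a sketch):

```python
docker_image('image_builder')  # -> 'taskcluster/image_builder:0.1.3'
```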
@ -26,8 +108,14 @@ To build an image, invoke `build.sh` with the name of the folder (without a trai
This is a tiny wrapper around building the docker images via `docker
build -t $REGISTRY/$FOLDER:$FOLDER_VERSION`
Docker will successfully find the local, tagged image while you continue to hack on the image definitions.
Note: If no "VERSION" file is present in the image directory, the tag 'latest' will be used and no
registry will be prefixed to the image name. Such an image is only meant to be run locally and will
overwrite any existing image with the same name and tag.
On completion, if the image has been tagged with a version and registry, `build.sh` gives a
command to upload the image to the registry, but this is not necessary until the image
is ready for production usage. Docker will successfully find the local, tagged image
while you continue to hack on the image definitions.
## Adding a new image

View File

@ -38,6 +38,7 @@ build() {
local folder="$gecko_root/testing/docker/$image_name"
local folder_reg="$folder/REGISTRY"
local folder_ver="$folder/VERSION"
local could_deploy=false
if [ "$image_name" == "" ];
then
@ -46,21 +47,29 @@ build() {
fi
test -d "$folder" || usage_err "Unknown image: $image_name"
test -f "$folder_ver" || usage_err "$folder must contain VERSION file"
# Assume that if an image context directory does not contain a VERSION file then
# it is not suitable for deploying. Default to using 'latest' as the tag and
# warn the user at the end.
if [ ! -f "$folder_ver" ]; then
echo "This image does not contain a VERSION file. Will use 'latest' as the image version"
local tag="$image_name:latest"
else
local version=$(cat $folder_ver)
test -n "$version" || usage_err "$folder_ver is empty aborting..."
# Fallback to default registry if one is not in the folder...
if [ ! -f "$folder_reg" ]; then
folder_reg=$PWD/REGISTRY
fi
local registry=$(cat $folder_reg)
test -n "$registry" || usage_err "$folder_reg is empty aborting..."
local tag="$registry/$image_name:$version"
local could_deploy=true
fi
if [ -f $folder/build.sh ]; then
shift
$folder/build.sh -t $tag $* || exit 1
@ -71,7 +80,18 @@ build() {
fi
echo "Success built $image_name and tagged with $tag"
echo "If deploying now you can run 'docker push $tag'"
if [ "$could_deploy" = true ]; then
echo "If deploying now you can run 'docker push $tag'"
else
echo "*****************************************************************"
echo "WARNING: No VERSION file was found in the image directory."
echo "Image has not been prepared for deploying at this time."
echo "However, the image can be run locally. To prepare to "
echo "push to a user account on a docker registry, tag the image "
echo "by running 'docker tag $tag [REGISTRYHOST/][USERNAME/]NAME[:TAG]"
echo "prior to running 'docker push'."
echo "*****************************************************************"
fi
}
if ! which docker > /dev/null; then

View File

@ -1,6 +1,7 @@
FROM quay.io/mozilla/b2g-build:0.2.9
MAINTAINER Dustin J. Mitchell <dustin@mozilla.com>
ENV VERSION 1.2
ENV PYTHONPATH /tools/tools/lib/python:$PYTHONPATH
ENV TOOLTOOL_CACHE /home/worker/tools/tooltool-cache

View File

@ -1 +1,2 @@
taskcluster

View File

@ -0,0 +1,34 @@
FROM ubuntu:14.04
WORKDIR /home/worker/bin
RUN apt-get update && apt-get install -y apt-transport-https
RUN sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9 && \
sudo sh -c "echo deb https://get.docker.io/ubuntu docker main\
> /etc/apt/sources.list.d/docker.list"
RUN apt-get update && apt-get install -y \
lxc-docker-1.6.1 \
curl \
wget \
git \
mercurial \
tar \
zip \
unzip \
vim \
sudo \
ca-certificates \
build-essential
ENV NODE_VERSION v0.12.4
RUN cd /usr/local/ && \
curl https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-linux-x64.tar.gz | tar -xz --strip-components 1 && \
node -v
RUN npm install -g taskcluster-vcs@2.3.11
ADD bin /home/worker/bin
RUN chmod +x /home/worker/bin/*
# Set a default command useful for debugging
CMD ["/bin/bash", "--login"]

View File

@ -0,0 +1 @@
taskcluster

View File

@ -0,0 +1 @@
0.1.3

View File

@ -0,0 +1,35 @@
#!/bin/bash -vex
# Set bash options to exit immediately if a pipeline exits non-zero (-e),
# print a trace of commands (-x), and make output verbose, printing shell
# input as it's read (-v).
# See https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
set -x -e -v
# Prefix errors with taskcluster error prefix so that they are parsed by Treeherder
raise_error() {
echo
echo "[taskcluster:error] Error: $1"
exit 1
}
# Ensure that the PROJECT is specified so the image can be indexed
test -n "$PROJECT" || raise_error "Project must be provided."
test -n "$HASH" || raise_error "Context Hash must be provided."
mkdir /artifacts
if [ ! -z "$CONTEXT_URL" ]; then
mkdir /context
curl -L "$CONTEXT_URL" | tar -xz --strip-components 1 -C /context
CONTEXT_PATH=/context
else
tc-vcs checkout /home/worker/workspace/src $BASE_REPOSITORY $HEAD_REPOSITORY $HEAD_REV $HEAD_REF
CONTEXT_PATH=/home/worker/workspace/src/$CONTEXT_PATH
fi
test -d "$CONTEXT_PATH" || raise_error "Context Path $CONTEXT_PATH does not exist."
test -f "$CONTEXT_PATH/Dockerfile" || raise_error "Dockerfile must be present in $CONTEXT_PATH."
docker build -t $PROJECT:$HASH $CONTEXT_PATH
docker save $PROJECT:$HASH > /artifacts/image.tar

View File

@ -23,42 +23,17 @@ from mach.decorators import (
ROOT = os.path.dirname(os.path.realpath(__file__))
GECKO = os.path.realpath(os.path.join(ROOT, '..', '..'))
DOCKER_ROOT = os.path.join(ROOT, '..', 'docker')
# XXX: If/when we have the taskcluster queue use construct url instead
ARTIFACT_URL = 'https://queue.taskcluster.net/v1/task/{}/artifacts/{}'
REGISTRY = open(os.path.join(DOCKER_ROOT, 'REGISTRY')).read().strip()
DEFINE_TASK = 'queue:define-task:aws-provisioner-v1/{}'
TREEHERDER_ROUTE_PREFIX = 'tc-treeherder-stage'
TREEHERDER_ROUTES = {
'staging': 'tc-treeherder-stage',
'production': 'tc-treeherder'
}
DEFAULT_TRY = 'try: -b do -p all -u all'
DEFAULT_JOB_PATH = os.path.join(
ROOT, 'tasks', 'branches', 'base_jobs.yml'
)
def docker_image(name):
''' Determine the docker tag/revision from an in tree docker file '''
repository_path = os.path.join(DOCKER_ROOT, name, 'REGISTRY')
repository = REGISTRY
version = open(os.path.join(DOCKER_ROOT, name, 'VERSION')).read().strip()
if os.path.isfile(repository_path):
repository = open(repository_path).read().strip()
return '{}/{}:{}'.format(repository, name, version)
def get_task(task_id):
import urllib2
return json.load(urllib2.urlopen("https://queue.taskcluster.net/v1/task/" + task_id))
def gaia_info():
'''
Fetch details from in tree gaia.json (which links this version of
@ -88,42 +63,6 @@ def gaia_info():
'gaia_ref': gaia['git']['branch'],
}
def decorate_task_treeherder_routes(task, suffix):
"""
Decorate the given task with treeherder routes.
Uses task.extra.treeherderEnv if available otherwise defaults to only
staging.
:param dict task: task definition.
:param str suffix: The project/revision_hash portion of the route.
"""
if 'extra' not in task:
return
if 'routes' not in task:
task['routes'] = []
treeherder_env = task['extra'].get('treeherderEnv', ['staging'])
for env in treeherder_env:
task['routes'].append('{}.{}'.format(TREEHERDER_ROUTES[env], suffix))
def decorate_task_json_routes(task, json_routes, parameters):
"""
Decorate the given task with routes.json routes.
:param dict task: task definition.
:param json_routes: the list of routes to use from routes.json
:param parameters: dictionary of parameters to use in route templates
"""
routes = task.get('routes', [])
for route in json_routes:
routes.append(route.format(**parameters))
task['routes'] = routes
def configure_dependent_task(task_path, parameters, taskid, templates, build_treeherder_config):
"""
Configure a build dependent task. This is shared between post-build and test tasks.
@ -330,8 +269,17 @@ class Graph(object):
action='store_true', default=False,
help="Stub out taskIds and date fields from the task definitions.")
def create_graph(self, **params):
from functools import partial
from slugid import nice as slugid
import taskcluster_graph.transform.routes as routes_transform
from taskcluster_graph.commit_parser import parse_commit
from taskcluster_graph.image_builder import (
docker_image,
normalize_image_details,
task_id_for_image
)
from taskcluster_graph.from_now import (
json_time_from_now,
current_json_time,
@ -374,17 +322,20 @@ class Graph(object):
pushdate = time.strftime('%Y%m%d%H%M%S', time.gmtime(pushinfo.pushdate))
# Template parameters used when expanding the graph
seen_images = {}
parameters = dict(gaia_info().items() + {
'index': 'index',
'project': project,
'pushlog_id': params.get('pushlog_id', 0),
'docker_image': docker_image,
'task_id_for_image': partial(task_id_for_image, seen_images, project),
'base_repository': params['base_repository'] or \
params['head_repository'],
'head_repository': params['head_repository'],
'head_ref': params['head_ref'] or params['head_rev'],
'head_rev': params['head_rev'],
'pushdate': pushdate,
'pushtime': pushdate[8:],
'year': pushdate[0:4],
'month': pushdate[4:6],
'day': pushdate[6:8],
@ -412,8 +363,11 @@ class Graph(object):
}
if params['revision_hash']:
for env in routes_transform.TREEHERDER_ROUTES:
route = 'queue:route:{}.{}'.format(
routes_transform.TREEHERDER_ROUTES[env],
treeherder_route)
graph['scopes'].append(route)
graph['metadata'] = {
'source': 'http://todo.com/what/goes/here',
@ -438,6 +392,11 @@ class Graph(object):
build_parameters['build_type'] = task_extra['build_type']
build_parameters['build_product'] = task_extra['build_product']
normalize_image_details(graph,
build_task,
seen_images,
build_parameters,
os.environ.get('TASK_ID', None))
set_interactive_task(build_task, interactive)
# try builds don't use cache
@ -445,11 +404,11 @@ class Graph(object):
remove_caches_from_task(build_task)
if params['revision_hash']:
routes_transform.decorate_task_treeherder_routes(build_task['task'],
treeherder_route)
routes_transform.decorate_task_json_routes(build_task['task'],
json_routes,
build_parameters)
# Ensure each build graph is valid after construction.
taskcluster_graph.build_task.validate(build_task)
@ -531,6 +490,11 @@ class Graph(object):
slugid(),
templates,
build_treeherder_config)
normalize_image_details(graph,
post_task,
seen_images,
build_parameters,
os.environ.get('TASK_ID', None))
set_interactive_task(post_task, interactive)
graph['tasks'].append(post_task)
@ -571,11 +535,18 @@ class Graph(object):
slugid(),
templates,
build_treeherder_config)
normalize_image_details(graph,
test_task,
seen_images,
build_parameters,
os.environ.get('TASK_ID', None))
set_interactive_task(test_task, interactive)
if params['revision_hash']:
routes_transform.decorate_task_treeherder_routes(
test_task['task'],
treeherder_route
)
graph['tasks'].append(test_task)

View File

@ -14,5 +14,10 @@
"{index}.gecko.v2.{project}.revision.{head_rev}.{build_product}-l10n.{build_name}-{build_type}.{locale}",
"{index}.gecko.v2.{project}.pushdate.{year}.{month}.{day}.{pushdate}.{build_product}-l10n.{build_name}-{build_type}.{locale}",
"{index}.gecko.v2.{project}.latest.{build_product}-l10n.{build_name}-{build_type}.{locale}"
],
"docker_images": [
"{index}.docker.images.v1.{project}.{image_name}.latest",
"{index}.docker.images.v1.{project}.{image_name}.pushdate.{year}.{month}-{day}-{pushtime}",
"{index}.docker.images.v1.{project}.{image_name}.hash.{context_hash}"
]
}

View File

@ -0,0 +1,225 @@
import hashlib
import json
import os
import tarfile
import urllib2
import taskcluster_graph.transform.routes as routes_transform
from slugid import nice as slugid
from taskcluster_graph.templates import Templates
TASKCLUSTER_ROOT = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))
IMAGE_BUILD_TASK = os.path.join(TASKCLUSTER_ROOT, 'tasks', 'image.yml')
GECKO = os.path.realpath(os.path.join(TASKCLUSTER_ROOT, '..', '..'))
DOCKER_ROOT = os.path.join(GECKO, 'testing', 'docker')
REGISTRY = open(os.path.join(DOCKER_ROOT, 'REGISTRY')).read().strip()
INDEX_URL = 'https://index.taskcluster.net/v1/task/docker.images.v1.{}.{}.hash.{}'
ARTIFACT_URL = 'https://queue.taskcluster.net/v1/task/{}/artifacts/{}'
DEFINE_TASK = 'queue:define-task:aws-provisioner-v1/{}'
def is_docker_registry_image(registry_path):
return os.path.isfile(registry_path)
def docker_image(name):
''' Determine the docker tag/revision from an in tree docker file '''
repository_path = os.path.join(DOCKER_ROOT, name, 'REGISTRY')
repository = REGISTRY
version = open(os.path.join(DOCKER_ROOT, name, 'VERSION')).read().strip()
if os.path.isfile(repository_path):
repository = open(repository_path).read().strip()
return '{}/{}:{}'.format(repository, name, version)
def task_id_for_image(seen_images, project, name):
if name in seen_images:
return seen_images[name]['taskId']
context_path = os.path.join('testing', 'docker', name)
context_hash = generate_context_hash(context_path)
task_id = get_task_id_for_namespace(project, name, context_hash)
if task_id:
seen_images[name] = {'taskId': task_id}
return task_id
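# No indexed image exists for this context hash; allocate a task ID up front
# so that dependent tasks can reference the image build task before it is
# actually defined in the graph.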
task_id = slugid()
seen_images[name] = {
'taskId': task_id,
'path': context_path,
'hash': context_hash
}
return task_id
def image_artifact_exists_for_task_id(task_id, path):
''' Verifies that the artifact exists for the task ID '''
try:
request = urllib2.Request(ARTIFACT_URL.format(task_id, path))
request.get_method = lambda: 'HEAD'
urllib2.urlopen(request)
return True
except urllib2.HTTPError:
return False
def get_task_id_for_namespace(project, name, context_hash):
'''
Determine the Task ID for an indexed image.
As an optimization, if the context hash exists for mozilla-central, that image
task ID will be used. The reasoning behind this is that everything eventually
ends up on mozilla-central; if most tasks use it as a common image for a
given context hash, a worker within Taskcluster does not need to contain
the same image per branch.
'''
for p in ['mozilla-central', project]:
image_index_url = INDEX_URL.format(p, name, context_hash)
try:
task = json.load(urllib2.urlopen(image_index_url))
# Ensure that the artifact exists for the task and hasn't expired
artifact_exists = image_artifact_exists_for_task_id(task['taskId'],
'public/image.tar')
# Only return the task ID if the artifact exists for the indexed
# task; otherwise continue looking at the other branches. Branches are
# tried in turn in case mozilla-central has an expired artifact while
# 'project' does not. None is returned only once all branches have
# been tried.
if artifact_exists:
return task['taskId']
except urllib2.HTTPError:
pass
return None
def generate_context_hash(image_path):
'''
Generates a sha256 hash for the context directory used to build an image.
File paths within the directory are sorted alphabetically, the contents of
each file are hashed, and then a hash is created over the file hashes
combined with their paths. This ensures that hashes are consistent and
change if file locations within the context directory change.
'''
context_hash = hashlib.sha256()
files = []
for dirpath, dirnames, filenames in os.walk(os.path.join(GECKO, image_path)):
for filename in filenames:
files.append(os.path.join(dirpath, filename))
for filename in sorted(files):
with open(filename, 'rb') as f:
file_hash = hashlib.sha256()
while True:
data = f.read(4096)  # read in chunks rather than the whole file at once
if not data:
break
file_hash.update(data)
context_hash.update(file_hash.hexdigest() + '\t' + filename + '\n')
return context_hash.hexdigest()
def create_context_tar(context_dir, destination, image_name):
''' Creates a tar file of a particular context directory '''
if not os.path.exists(os.path.dirname(destination)):
os.makedirs(os.path.dirname(destination))
with tarfile.open(destination, 'w:gz') as tar:
tar.add(context_dir, arcname=image_name)
def image_requires_building(details):
''' Returns true if an image task should be created for a particular image '''
return 'path' in details and 'hash' in details
def create_image_task_parameters(params, name, details):
image_parameters = dict(params)
image_parameters['context_hash'] = details['hash']
image_parameters['context_path'] = details['path']
image_parameters['artifact_path'] = 'public/image.tar'
image_parameters['image_slugid'] = details['taskId']
image_parameters['image_name'] = name
return image_parameters
def get_image_details(seen_images, task_id):
'''
Based on a collection of image details, return the details
for an image matching the requested task_id.
Image details can include a path and hash indicating that the image requires
building.
'''
for name, details in seen_images.items():
if details['taskId'] == task_id:
return [name, details]
return None
def get_json_routes():
''' Returns routes that should be included in the image task. '''
routes_file = os.path.join(TASKCLUSTER_ROOT, 'routes.json')
with open(routes_file) as f:
contents = json.load(f)
json_routes = contents['docker_images']
return json_routes
def normalize_image_details(graph, task, seen_images, params, decision_task_id):
'''
This takes a task-image payload and creates an image task to build that
image.
The task-image payload is then converted to use the specific task ID of that
built image. All tasks within the graph requiring this same image will have
their image details normalized and will depend on the same image build task.
'''
image = task['task']['payload']['image']
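# Plain strings and 'docker-image' payloads reference a prebuilt registry
# image, so no image build task is required.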
if isinstance(image, str) or image.get('type', 'docker-image') == 'docker-image':
return
if 'requires' not in task:
task['requires'] = []
name, details = get_image_details(seen_images, image['taskId'])
if details.get('required', False) is True or image_requires_building(details) is False:
if 'required' in details:
task['requires'].append(details['taskId'])
return
image_parameters = create_image_task_parameters(params, name, details)
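# When invoked from within a decision task, package the image context as an
# artifact of the decision task itself so the image build task can download
# it directly (via CONTEXT_URL in build_image.sh) instead of checking out
# the tree.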
if decision_task_id:
image_artifact_path = "public/decision_task/image_contexts/{}/context.tar.gz".format(name)
destination = "/home/worker/artifacts/decision_task/image_contexts/{}/context.tar.gz".format(name)
image_parameters['context_url'] = ARTIFACT_URL.format(decision_task_id, image_artifact_path)
create_context_tar(image_parameters['context_path'], destination, name)
templates = Templates(TASKCLUSTER_ROOT)
image_task = templates.load(IMAGE_BUILD_TASK, image_parameters)
if params['revision_hash']:
routes_transform.decorate_task_treeherder_routes(
image_task['task'],
"{}.{}".format(params['project'], params['revision_hash'])
)
routes_transform.decorate_task_json_routes(image_task['task'],
get_json_routes(),
image_parameters)
graph['tasks'].append(image_task)
task['requires'].append(details['taskId'])
define_task = DEFINE_TASK.format(
image_task['task']['workerType']
)
graph['scopes'].append(define_task)
graph['scopes'].extend(image_task['task'].get('scopes', []))
route_scopes = map(lambda route: 'queue:route:' + route, image_task['task'].get('routes', []))
graph['scopes'].extend(route_scopes)
details['required'] = True

View File

@ -0,0 +1,42 @@
TREEHERDER_ROUTE_PREFIX = 'tc-treeherder-stage'
TREEHERDER_ROUTES = {
'staging': 'tc-treeherder-stage',
'production': 'tc-treeherder'
}
def decorate_task_treeherder_routes(task, suffix):
"""
Decorate the given task with treeherder routes.
Uses task.extra.treeherderEnv if available otherwise defaults to only
staging.
:param dict task: task definition.
:param str suffix: The project/revision_hash portion of the route.
"""
if 'extra' not in task:
return
if 'routes' not in task:
task['routes'] = []
treeherder_env = task['extra'].get('treeherderEnv', ['staging'])
for env in treeherder_env:
task['routes'].append('{}.{}'.format(TREEHERDER_ROUTES[env], suffix))
def decorate_task_json_routes(task, json_routes, parameters):
"""
Decorate the given task with routes.json routes.
:param dict task: task definition.
:param json_routes: the list of routes to use from routes.json
:param parameters: dictionary of parameters to use in route templates
"""
routes = task.get('routes', [])
for route in json_routes:
routes.append(route.format(**parameters))
task['routes'] = routes

View File

@ -25,7 +25,6 @@ task:
# the board.
- 'docker-worker:cache:tc-vcs'
payload:
image: '{{#docker_image}}builder{{/docker_image}}'

View File

@ -11,6 +11,11 @@ task:
MOZCONFIG: 'b2g/config/mozconfigs/linux64_gecko/nightly'
TOOLTOOL_MANIFEST: 'b2g/config/tooltool-manifests/linux64/releng.manifest'
image:
type: 'task-image'
path: 'public/image.tar'
taskId: '{{#task_id_for_image}}builder{{/task_id_for_image}}'
command:
- /bin/bash
- -c

View File

@ -66,16 +66,24 @@ tasks:
- -cx
- >
source ./bin/decision.sh &&
mkdir -p /home/worker/artifacts &&
./mach taskcluster-graph
--pushlog-id='{{pushlog_id}}'
--message='{{comment}}'
--project='{{project}}'
--owner='{{owner}}'
--revision-hash='{{revision_hash}}'
--extend-graph > /home/worker/artifacts/graph.json
graphs:
- /home/worker/artifacts/graph.json
artifacts:
'public':
type: 'directory'
path: '/home/worker/artifacts'
# Arbitrary value for keeping these artifacts around. They are just the
# graph.json and context directories for now, so nothing that needs
# to stay around for long.
expires: '{{#from_now}}7 days{{/from_now}}'
extra:
treeherder:
symbol: D

View File

@ -71,6 +71,7 @@ tasks:
- /bin/bash
- -cx
- >
mkdir -p /home/worker/artifacts &&
checkout-gecko workspace &&
cd workspace/gecko &&
./mach taskcluster-graph
@ -79,9 +80,19 @@ tasks:
--message='{{comment}}'
--owner='{{owner}}'
--revision-hash='{{revision_hash}}'
--extend-graph > /home/worker/artifacts/graph.json
graphs:
- /home/worker/artifacts/graph.json
artifacts:
'public':
type: 'directory'
path: '/home/worker/artifacts'
# Arbitrary value for keeping these artifacts around. They are just the
# graph.json and context directories for now, so nothing that needs
# to stay around for long.
expires: '{{#from_now}}7 days{{/from_now}}'
extra:
treeherder:

View File

@ -0,0 +1,52 @@
# This is the "base" task which contains the common values all builds must
# provide.
---
taskId: '{{image_slugid}}'
task:
created: '{{now}}'
deadline: '{{#from_now}}24 hours{{/from_now}}'
metadata:
name: 'Docker Artifact Image Builder'
description: 'Builder for docker images as artifacts'
source: http://todo.com/soon
owner: mozilla-taskcluster-maintenance@mozilla.com
tags:
createdForUser: {{owner}}
workerType: taskcluster-images
provisionerId: aws-provisioner-v1
schedulerId: task-graph-scheduler
payload:
env:
HASH: '{{context_hash}}'
PROJECT: '{{project}}'
CONTEXT_URL: '{{context_url}}'
CONTEXT_PATH: '{{context_path}}'
BASE_REPOSITORY: '{{base_repository}}'
HEAD_REPOSITORY: '{{head_repository}}'
HEAD_REV: '{{head_rev}}'
HEAD_REF: '{{head_ref}}'
features:
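# docker-in-docker, so the task itself can run `docker build` and `docker save`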
dind: true
image: '{{#docker_image}}image_builder{{/docker_image}}'
command:
- /bin/bash
- -c
- /home/worker/bin/build_image.sh
maxRunTime: 3600
artifacts:
'{{artifact_path}}':
type: 'file'
path: '/artifacts/image.tar'
expires: '{{#from_now}}1 year{{/from_now}}'
extra:
treeherderEnv:
- staging
- production
treeherder:
build:
platform: 'taskcluster-images'
symbol: 'I'