Imported Upstream version 6.10.0.49

Former-commit-id: 1d6753294b2993e1fbf92de9366bb9544db4189b
This commit is contained in:
Xamarin Public Jenkins (auto-signing)
2020-01-16 16:38:04 +00:00
parent d94e79959b
commit 468663ddbb
48518 changed files with 2789335 additions and 61176 deletions

View File

@ -0,0 +1,4 @@
DisableFormat: true
# Disabling formatting doesn't implicitly disable include sorting
SortIncludes: false

View File

@ -0,0 +1,29 @@
# Module level initialization for the `lldbsuite` module.

import inspect
import os
import sys


def find_lldb_root():
    lldb_root = os.path.dirname(inspect.getfile(inspect.currentframe()))
    while True:
        lldb_root = os.path.dirname(lldb_root)
        if lldb_root is None:
            return None
        test_path = os.path.join(lldb_root, "use_lldb_suite_root.py")
        if os.path.isfile(test_path):
            return lldb_root
    return None


# lldbsuite.lldb_root refers to the root of the git/svn source checkout
lldb_root = find_lldb_root()

# lldbsuite.lldb_test_root refers to the root of the python test tree
lldb_test_root = os.path.join(
    lldb_root,
    "packages",
    "Python",
    "lldbsuite",
    "test")

View File

@ -0,0 +1,55 @@
# pre\_kill\_hook package

## Overview

The pre\_kill\_hook package provides a per-platform method for running code
after a test process times out but before the concurrent test runner kills the
timed-out process.

## Detailed Description of Usage

If a platform defines the hook, then the hook gets called right after a timeout
is detected in a test run, but before the process is killed.

The pre-kill-hook mechanism works as follows:

* When a timeout is detected in the process_control.ProcessDriver class that
runs the per-test lldb process, a new overridable on\_timeout\_pre\_kill() method
is called on the ProcessDriver instance.
* The concurrent test driver's derived ProcessDriver overrides this method. It
looks to see if a module named
"lldbsuite.pre\_kill\_hook.{platform-system-name}" exists, where
platform-system-name is replaced with platform.system().lower() (e.g.
"Darwin" becomes the darwin.py module).
* If that module doesn't exist, the rest of the new behavior is skipped.
* If that module does exist, it is loaded, and the method
"do\_pre\_kill(process\_id, context\_dict, output\_stream)" is called. If
that method throws an exception, we log it and we ignore further processing
of the pre-killed process.
* The process\_id argument of the do\_pre\_kill function is the process id as
returned by the ProcessDriver.pid property.
* The output\_stream argument of the do\_pre\_kill function takes a file-like
object. Output collected from any processing done on the
process-to-be-killed should be written into the file-like object. The
current implementation uses a six.StringIO and then writes this output to
{TestFilename}-{pid}.sample in the session directory.
* Platforms where platform.system() is "Darwin" will get a pre-kill action that
runs the 'sample' program on the lldb that has timed out. That data will be
collected on CI and analyzed to determine what is happening during timeouts.
(This has an advantage over a core in that it is much smaller and that it
clearly demonstrates any liveness of the process, if there is any).
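The lookup-and-dispatch steps above can be sketched as follows. Note that `run_pre_kill_hook` is a hypothetical name invented for this sketch; the real logic lives in the overridden `on_timeout_pre_kill()` on the derived ProcessDriver:

```python
import importlib
import platform


def run_pre_kill_hook(process_id, runner_context, output_stream):
    # Build the platform-specific module name, e.g.
    # "lldbsuite.pre_kill_hook.darwin" when platform.system() is "Darwin".
    module_name = "lldbsuite.pre_kill_hook." + platform.system().lower()
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # No hook module for this platform: skip the rest of the behavior.
        return False
    # The module is expected to expose do_pre_kill(); the real driver logs
    # exceptions from it rather than letting them propagate.
    module.do_pre_kill(process_id, runner_context, output_stream)
    return True
```

Because the module is resolved by name at runtime, supporting a new platform is just a matter of dropping a new file into the pre\_kill\_hook package.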

## Running the tests

To run the tests in the pre\_kill\_hook package, open a console, change into
this directory and run the following:

```
python -m unittest discover
```

View File

@ -0,0 +1 @@
"""Initialize the package."""

View File

@ -0,0 +1,46 @@
"""Provides a pre-kill method to run on macOS."""

from __future__ import print_function

# system imports
import subprocess
import sys

# third-party module imports
import six


def do_pre_kill(process_id, runner_context, output_stream, sample_time=3):
    """Samples the given process id, and puts the output to output_stream.

    @param process_id the local process to sample.

    @param runner_context a dictionary of details about the architectures
    and platform on which the given process is running.  Expected keys are
    archs (array of architectures), platform_name, platform_url, and
    platform_working_dir.

    @param output_stream file-like object that should be used to write the
    results of sampling.

    @param sample_time specifies the time in seconds that should be captured.
    """
    # Validate args.
    if runner_context is None:
        raise Exception("runner_context argument is required")
    if not isinstance(runner_context, dict):
        raise Exception("runner_context argument must be a dictionary")

    # We will try to run sample on the local host only if there is no URL
    # to a remote.
    if "platform_url" in runner_context and (
            runner_context["platform_url"] is not None):
        import pprint
        sys.stderr.write(
            "warning: skipping timeout pre-kill sample invocation because we "
            "don't know how to run on a remote yet. runner_context={}\n"
            .format(pprint.pformat(runner_context)))
        # Skip the sample invocation for remote targets.
        return

    output = subprocess.check_output(['sample', six.text_type(process_id),
                                      str(sample_time)])
    output_stream.write(output)

View File

@ -0,0 +1,76 @@
"""Provides a pre-kill method to run on Linux.

This timeout pre-kill method relies on the Linux perf-tools
distribution.  The appropriate way to obtain this set of tools
will depend on the Linux distribution.

For Ubuntu 16.04, invoke the following command:

sudo apt-get install perf-tools-unstable
"""

from __future__ import print_function

# system imports
import os
import subprocess
import sys
import tempfile


def do_pre_kill(process_id, runner_context, output_stream, sample_time=3):
    """Samples the given process id, and puts the output to output_stream.

    @param process_id the local process to sample.

    @param runner_context a dictionary of details about the architectures
    and platform on which the given process is running.  Expected keys are
    archs (array of architectures), platform_name, platform_url, and
    platform_working_dir.

    @param output_stream file-like object that should be used to write the
    results of sampling.

    @param sample_time specifies the time in seconds that should be captured.
    """
    # Validate args.
    if runner_context is None:
        raise Exception("runner_context argument is required")
    if not isinstance(runner_context, dict):
        raise Exception("runner_context argument must be a dictionary")

    # We will try to run sample on the local host only if there is no URL
    # to a remote.
    if "platform_url" in runner_context and (
            runner_context["platform_url"] is not None):
        import pprint
        sys.stderr.write(
            "warning: skipping timeout pre-kill sample invocation because we "
            "don't know how to run on a remote yet. runner_context={}\n"
            .format(pprint.pformat(runner_context)))
        # Skip the perf invocation for remote targets.
        return

    # We're going to create a temp file, and immediately overwrite it with the
    # following command.  This just ensures we don't have any races in
    # creation of the temporary sample file.
    fileno, filename = tempfile.mkstemp(suffix='perfdata')
    os.close(fileno)
    fileno = None

    try:
        with open(os.devnull, 'w') as devnull:
            returncode = subprocess.call(['timeout', str(sample_time), 'perf',
                                          'record', '-g', '-o', filename,
                                          '-p', str(process_id)],
                                         stdout=devnull, stderr=devnull)
        if returncode == 0 or returncode == 124:
            # This is okay - 124 is the timeout return code, which is totally
            # expected.
            pass
        else:
            raise Exception("failed to call 'perf record ...', error: {}".format(
                returncode))

        with open(os.devnull, 'w') as devnull:
            output = subprocess.check_output(['perf', 'report', '--call-graph',
                                              '--stdio', '-i', filename],
                                             stderr=devnull)
        output_stream.write(output)
    finally:
        os.remove(filename)

View File

@ -0,0 +1,107 @@
"""Test the pre-kill hook on Darwin."""

from __future__ import print_function

# system imports
from multiprocessing import Process, Queue
import platform
import re
from unittest import main, TestCase

# third party
from six import StringIO


def do_child_process(child_work_queue, parent_work_queue, verbose):
    import os

    pid = os.getpid()
    if verbose:
        print("child: pid {} started, sending to parent".format(pid))
    parent_work_queue.put(pid)
    if verbose:
        print("child: waiting for shut-down request from parent")
    child_work_queue.get()
    if verbose:
        print("child: received shut-down request.  Child exiting.")


class DarwinPreKillTestCase(TestCase):

    def __init__(self, methodName):
        super(DarwinPreKillTestCase, self).__init__(methodName)
        self.process = None
        self.child_work_queue = None
        self.verbose = False

    def tearDown(self):
        if self.verbose:
            print("parent: sending shut-down request to child")
        if self.process:
            self.child_work_queue.put("hello, child")
            self.process.join()
        if self.verbose:
            print("parent: child is fully shut down")

    def test_sample(self):
        # Ensure we're Darwin.
        if platform.system() != 'Darwin':
            self.skipTest("requires a Darwin-based OS")

        # Start the child process.
        self.child_work_queue = Queue()
        parent_work_queue = Queue()
        self.process = Process(target=do_child_process,
                               args=(self.child_work_queue, parent_work_queue,
                                     self.verbose))
        if self.verbose:
            print("parent: starting child")
        self.process.start()

        # Wait for the child to report its pid.  Then we know we're running.
        if self.verbose:
            print("parent: waiting for child to start")
        child_pid = parent_work_queue.get()

        # Sample the child process.
        from darwin import do_pre_kill
        context_dict = {
            "archs": [platform.machine()],
            "platform_name": None,
            "platform_url": None,
            "platform_working_dir": None
        }
        if self.verbose:
            print("parent: running pre-kill action on child")
        output_io = StringIO()
        do_pre_kill(child_pid, context_dict, output_io)
        output = output_io.getvalue()

        if self.verbose:
            print("parent: do_pre_kill() wrote the following output:", output)
        self.assertIsNotNone(output)

        # We should have a line with:
        # Process:  .* [{pid}]
        process_re = re.compile(r"Process:[^[]+\[([^]]+)\]")
        match = process_re.search(output)
        self.assertIsNotNone(match, "should have found process id for "
                             "sampled process")
        self.assertEqual(1, len(match.groups()))
        self.assertEqual(child_pid, int(match.group(1)))

        # We should see a Call graph: section.
        callgraph_re = re.compile(r"Call graph:")
        match = callgraph_re.search(output)
        self.assertIsNotNone(match, "should have found the Call graph "
                             "section in sample output")

        # We should see a Binary Images: section.
        binary_images_re = re.compile(r"Binary Images:")
        match = binary_images_re.search(output)
        self.assertIsNotNone(match, "should have found the Binary Images "
                             "section in sample output")


if __name__ == "__main__":
    main()

View File

@ -0,0 +1,133 @@
"""Test the pre-kill hook on Linux."""

from __future__ import print_function

# system imports
from multiprocessing import Process, Queue
import platform
import re
import subprocess
from unittest import main, TestCase

# third party
from six import StringIO


def do_child_thread():
    import os

    x = 0
    while True:
        x = x + 42 * os.getpid()
    return x


def do_child_process(child_work_queue, parent_work_queue, verbose):
    import os

    pid = os.getpid()
    if verbose:
        print("child: pid {} started, sending to parent".format(pid))
    parent_work_queue.put(pid)

    # Spin up a daemon thread to do some "work", which will show
    # up in a sample of this process.
    import threading
    worker = threading.Thread(target=do_child_thread)
    worker.daemon = True
    worker.start()

    if verbose:
        print("child: waiting for shut-down request from parent")
    child_work_queue.get()
    if verbose:
        print("child: received shut-down request.  Child exiting.")


class LinuxPreKillTestCase(TestCase):

    def __init__(self, methodName):
        super(LinuxPreKillTestCase, self).__init__(methodName)
        self.process = None
        self.child_work_queue = None
        self.verbose = False
        # self.verbose = True

    def tearDown(self):
        if self.verbose:
            print("parent: sending shut-down request to child")
        if self.process:
            self.child_work_queue.put("hello, child")
            self.process.join()
        if self.verbose:
            print("parent: child is fully shut down")

    def test_sample(self):
        # Ensure we're Linux.
        if platform.system() != 'Linux':
            self.skipTest("requires a Linux-based OS")

        # Ensure we have the 'perf' tool.  If not, skip the test.
        try:
            perf_version = subprocess.check_output(["perf", "version"])
            if perf_version is None or not (
                    perf_version.startswith("perf version")):
                raise Exception("The perf executable doesn't appear"
                                " to be the Linux perf tools perf")
        except Exception:
            self.skipTest("requires the Linux perf tools 'perf' command")

        # Start the child process.
        self.child_work_queue = Queue()
        parent_work_queue = Queue()
        self.process = Process(target=do_child_process,
                               args=(self.child_work_queue, parent_work_queue,
                                     self.verbose))
        if self.verbose:
            print("parent: starting child")
        self.process.start()

        # Wait for the child to report its pid.  Then we know we're running.
        if self.verbose:
            print("parent: waiting for child to start")
        child_pid = parent_work_queue.get()

        # Sample the child process.
        from linux import do_pre_kill
        context_dict = {
            "archs": [platform.machine()],
            "platform_name": None,
            "platform_url": None,
            "platform_working_dir": None
        }
        if self.verbose:
            print("parent: running pre-kill action on child")
        output_io = StringIO()
        do_pre_kill(child_pid, context_dict, output_io)
        output = output_io.getvalue()

        if self.verbose:
            print("parent: do_pre_kill() wrote the following output:", output)
        self.assertIsNotNone(output)

        # We should have a samples count entry.
        # Samples:
        self.assertTrue("Samples:" in output, "should have found a 'Samples:' "
                        "field in the sampled process output")

        # We should see an event count entry
        event_count_re = re.compile(r"Event count[^:]+:\s+(\d+)")
        match = event_count_re.search(output)
        self.assertIsNotNone(match, "should have found the event count entry "
                             "in sample output")
        if self.verbose:
            print("cpu-clock events:", match.group(1))

        # We should see some percentages in the file.
        percentage_re = re.compile(r"\d+\.\d+%")
        match = percentage_re.search(output)
        self.assertIsNotNone(match, "should have found at least one percentage "
                             "in the sample output")


if __name__ == "__main__":
    main()

View File

@ -0,0 +1,65 @@
"""
                     The LLVM Compiler Infrastructure

This file is distributed under the University of Illinois Open Source
License. See LICENSE.TXT for details.

Prepares language bindings for LLDB build process.  Run with --help
to see a description of the supported command line arguments.
"""

# Python modules:
import io

# Third party modules
import six


def _encoded_read(old_read, encoding):
    def impl(size):
        result = old_read(size)
        # If this is Python 2 then we need to convert the resulting `unicode`
        # back into a `str` before returning.
        if six.PY2:
            result = result.encode(encoding)
        return result
    return impl


def _encoded_write(old_write, encoding):
    def impl(s):
        # If we were asked to write a `str` (in Py2) or a `bytes` (in Py3),
        # decode it as unicode before attempting to write.
        if isinstance(s, six.binary_type):
            s = s.decode(encoding)
        return old_write(s)
    return impl


'''
Create a Text I/O file object that can be written to with either unicode
strings or byte strings under Python 2 and Python 3, and automatically
encodes and decodes as necessary to return the native string type for the
current Python version.
'''


def open(
        file,
        encoding,
        mode='r',
        buffering=-1,
        errors=None,
        newline=None,
        closefd=True):
    wrapped_file = io.open(
        file,
        mode=mode,
        buffering=buffering,
        encoding=encoding,
        errors=errors,
        newline=newline,
        closefd=closefd)
    new_read = _encoded_read(getattr(wrapped_file, 'read'), encoding)
    new_write = _encoded_write(getattr(wrapped_file, 'write'), encoding)
    setattr(wrapped_file, 'read', new_read)
    setattr(wrapped_file, 'write', new_write)
    return wrapped_file

View File

@ -0,0 +1,65 @@
"""
                     The LLVM Compiler Infrastructure

This file is distributed under the University of Illinois Open Source
License. See LICENSE.TXT for details.

Prepares language bindings for LLDB build process.  Run with --help
to see a description of the supported command line arguments.
"""

# Python modules:
import os
import platform
import sys


def _find_file_in_paths(paths, exe_basename):
    """Returns the full exe path for the first path match.

    @params paths the list of directories to search for the exe_basename
    executable.

    @params exe_basename the name of the file for which to search.
    e.g. "swig" or "swig.exe".

    @return the full path to the executable if found in one of the
    given paths; otherwise, returns None.
    """
    for path in paths:
        trial_exe_path = os.path.join(path, exe_basename)
        if os.path.exists(trial_exe_path):
            return os.path.normcase(trial_exe_path)
    return None


def find_executable(executable):
    """Finds the specified executable in the PATH or known good locations."""

    # Figure out what we're looking for.
    if platform.system() == "Windows":
        executable = executable + ".exe"
        extra_dirs = []
    else:
        extra_dirs = ["/usr/local/bin"]

    # Figure out what paths to check.
    path_env = os.environ.get("PATH", None)
    if path_env is not None:
        paths_to_check = path_env.split(os.path.pathsep)
    else:
        paths_to_check = []

    # Add in the extra dirs
    paths_to_check.extend(extra_dirs)
    if len(paths_to_check) < 1:
        raise OSError(
            "executable was not specified, PATH has no "
            "contents, and there are no extra directories to search")

    result = _find_file_in_paths(paths_to_check, executable)
    if not result or len(result) < 1:
        raise OSError(
            "failed to find exe='%s' in paths='%s'" %
            (executable, paths_to_check))
    return result

View File

@ -0,0 +1,24 @@
from __future__ import print_function
from __future__ import absolute_import

# System modules
import inspect

# Third-party modules

# LLDB modules


def requires_self(func):
    func_argc = len(inspect.getargspec(func).args)
    if func_argc == 0 or (
            getattr(func, 'im_self', None) is not None) or (
            hasattr(func, '__self__')):
        return False
    else:
        return True

View File

@ -0,0 +1,29 @@
from __future__ import absolute_import
from __future__ import print_function

# System modules
import os
import re

GMODULES_SUPPORT_MAP = {}
GMODULES_HELP_REGEX = re.compile(r"\s-gmodules\s", re.DOTALL)


def is_compiler_clang_with_gmodules(compiler_path):
    # Before computing the result, check if we already have it cached.
    if compiler_path in GMODULES_SUPPORT_MAP:
        return GMODULES_SUPPORT_MAP[compiler_path]

    def _gmodules_supported_internal():
        compiler = os.path.basename(compiler_path)
        if "clang" not in compiler:
            return False
        else:
            # Check the compiler help for the -gmodules option.  Note that
            # re.search() takes optional pos/endpos arguments, not flags, so
            # the DOTALL flag belongs in the compile() call above.
            clang_help = os.popen("%s --help" % compiler_path).read()
            return GMODULES_HELP_REGEX.search(clang_help) is not None

    GMODULES_SUPPORT_MAP[compiler_path] = _gmodules_supported_internal()
    return GMODULES_SUPPORT_MAP[compiler_path]

View File

@ -0,0 +1,58 @@
# ====================================================================
# Provides a with-style resource handler for optionally-None resources
# ====================================================================


class optional_with(object):
    # pylint: disable=too-few-public-methods
    # This is a wrapper - it is not meant to provide any extra methods.
    """Provides a wrapper for objects supporting "with", allowing None.

    This lets a user use the "with object" syntax for resource usage
    (e.g. locks) even when the wrapped with object is None.

    e.g.

    wrapped_lock = optional_with(thread.Lock())
    with wrapped_lock:
        # Do something while the lock is obtained.
        pass

    might_be_none = None
    wrapped_none = optional_with(might_be_none)
    with wrapped_none:
        # This code here still works.
        pass

    This prevents having to write code like this when
    a lock is optional:

    if lock:
        lock.acquire()

    try:
        code_fragment_always_run()
    finally:
        if lock:
            lock.release()

    And I'd posit it is safer, as it becomes impossible to
    forget the try/finally using optional_with(), since
    the with syntax can be used.
    """

    def __init__(self, wrapped_object):
        self.wrapped_object = wrapped_object

    def __enter__(self):
        if self.wrapped_object is not None:
            return self.wrapped_object.__enter__()
        else:
            return self

    def __exit__(self, the_type, value, traceback):
        if self.wrapped_object is not None:
            return self.wrapped_object.__exit__(the_type, value, traceback)
        else:
            # Don't suppress any exceptions
            return False

View File

@ -0,0 +1,25 @@
import six

if six.PY2:
    import commands
    get_command_output = commands.getoutput
    get_command_status_output = commands.getstatusoutput

    cmp_ = cmp
else:
    import subprocess

    def get_command_status_output(command):
        try:
            return (
                0,
                subprocess.check_output(
                    command,
                    shell=True,
                    universal_newlines=True))
        except subprocess.CalledProcessError as e:
            return (e.returncode, e.output)

    def get_command_output(command):
        return get_command_status_output(command)[1]

    cmp_ = lambda x, y: (x > y) - (x < y)

View File

@ -0,0 +1,24 @@
"""
                     The LLVM Compiler Infrastructure

This file is distributed under the University of Illinois Open Source
License. See LICENSE.TXT for details.

Helper functions for working with sockets.
"""

# Python modules:
import io
import socket

# LLDB modules
import use_lldb_suite


def recvall(sock, size):
    bytes = io.BytesIO()
    while size > 0:
        this_result = sock.recv(size)
        if not this_result:
            # The peer closed the connection before `size` bytes arrived;
            # bail out rather than spinning forever on empty reads.
            break
        bytes.write(this_result)
        size -= len(this_result)
    return bytes.getvalue()
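As a quick sanity check, recvall can be exercised with a local socket pair. This is a self-contained sketch (the break-on-empty-read guard is an addition to keep the loop from spinning if the peer closes early):

```python
import io
import socket


def recvall(sock, size):
    # Accumulate exactly `size` bytes from the socket.
    data = io.BytesIO()
    while size > 0:
        chunk = sock.recv(size)
        if not chunk:
            break  # peer closed before `size` bytes arrived
        data.write(chunk)
        size -= len(chunk)
    return data.getvalue()


a, b = socket.socketpair()
b.sendall(b"hello, lldb")
print(recvall(a, 11))  # b'hello, lldb'
a.close()
b.close()
```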

View File

@ -0,0 +1,33 @@
LLDB_LEVEL := ..

include $(LLDB_LEVEL)/Makefile

.PHONY: programs

all:: check-local

#----------------------------------------------------------------------
# Make all of the test programs
#----------------------------------------------------------------------
programs:
	find . -type d -depth 1 | xargs -J % find % \
		-name Makefile \
		-exec echo \; \
		-exec echo make -f '{}' \; \
		-execdir make \;

#----------------------------------------------------------------------
# Clean all of the test programs
#----------------------------------------------------------------------
clean::
	find . -type d -depth 1 | xargs -J % find % \
		-name Makefile \
		-exec echo \; \
		-exec echo make -f '{}' clean \; \
		-execdir make clean \;

#----------------------------------------------------------------------
# Run the tests
#----------------------------------------------------------------------
check-local::
	rm -rf lldb-test-traces
	python $(PROJ_SRC_DIR)/dotest.py --executable $(ToolDir)/lldb -q -s lldb-test-traces -u CXXFLAGS -u CFLAGS -C $(subst ccache,,$(CC))

View File

@ -0,0 +1,173 @@
This README file describes the files and directories related to the Python test
suite under the current 'test' directory.
o dotest.py
Provides the test driver for the test suite. To invoke it, cd to the 'test'
directory and issue the './dotest.py' command or './dotest.py -v' for more
verbose output. './dotest.py -h' prints out the help message.
A specific naming pattern is followed by the .py script under the 'test'
directory in order to be recognized by 'dotest.py' test driver as a module
which implements a test case, namely, Test*.py.
Some example usages:
1. ./dotest.py -v . 2> ~/Developer/Log/lldbtest.log0
This runs the test suite and directs the run log to a file.
2. LLDB_LOG=/tmp/lldb.log GDB_REMOTE_LOG=/tmp/gdb-remote.log ./dotest.py -v . 2> ~/Developer/Log/lldbtest.log
This runs the test suite, with logging turned on for the lldb as well as
the process.gdb-remote channels and directs the run log to a file.
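The Test*.py recognition rule described above can be sketched as a small discovery helper. Note that find_test_modules is purely illustrative, not dotest.py's actual implementation:

```python
import fnmatch
import os
import tempfile


def find_test_modules(test_root):
    """Collect files matching the Test*.py pattern dotest.py looks for."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(test_root):
        for filename in fnmatch.filter(filenames, "Test*.py"):
            matches.append(os.path.join(dirpath, filename))
    return sorted(matches)


# Tiny demonstration against a scratch directory.
root = tempfile.mkdtemp()
open(os.path.join(root, "TestCommandSource.py"), "w").close()
open(os.path.join(root, "helper.py"), "w").close()
print(find_test_modules(root))  # only TestCommandSource.py matches
```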
o lldbtest.py
Provides an abstract base class of lldb test case named 'TestBase', which in
turn inherits from Python's unittest.TestCase. The concrete subclass can
override lldbtest.TestBase in order to inherit the common behavior for
unittest.TestCase.setUp/tearDown implemented in this file.
To provide a test case, the concrete subclass provides methods whose names
start with the letters test. For more details about the Python's unittest
framework, go to http://docs.python.org/library/unittest.html.
./command_source/TestCommandSource.py provides a simple example of test case
which overrides lldbtest.TestBase to exercise the lldb's 'command source'
command. The subclass should override the attribute 'mydir' in order for the
runtime to locate the individual test cases when running as part of a large
test suite or when running each test case as a separate Python invocation.
The doc string provides more details about the setup required for running a
test case on its own. To run the whole test suite, 'dotest.py' is all you
need to do.
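Using plain unittest.TestCase as a stand-in for lldbtest.TestBase (which requires a built lldb), the shape of such a test case looks roughly like this; the class name and mydir value are illustrative:

```python
import unittest


class TestCommandSourceSketch(unittest.TestCase):
    # In the real suite this attribute locates the test case within the
    # test tree; the value here is purely illustrative.
    mydir = "functionalities/command_source"

    def test_command_source(self):
        # Methods whose names start with "test" are what the runner collects.
        self.assertEqual(1 + 1, 2)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCommandSourceSketch)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```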
o subdirectories of 'test'
Most of them predate the introduction of the python test suite and contain
example C/C++/ObjC source files which get compiled into executables which are
to be exercised by the debugger.
Any such subdirectory that has an associated Test*.py file was added as
part of the Python-based test suite to test lldb functionality.
Some of the subdirectories, for example, the 'help' subdirectory, do not have
C/C++/ObjC source files; they were created to house the Python test case which
does not involve lldb reading in an executable file at all.
The sample_test directory contains examples of both a full and an "inline"
testcase that run a process to a breakpoint and check a local variable. These
are convenient starting points for adding new tests.
o make directory
Contains Makefile.rules, which can be utilized by test cases to write Makefile
based rules to build binaries for the inferiors.
By default, the built executable name is a.out, which can be overwritten by
specifying your EXE make variable, via the Makefile under the specific test
directory or via supplying a Python dictionary to the build method in your
Python test script. An example of the latter can be found in
test/lang/objc/radar-9691614/TestObjCMethodReturningBOOL.py, where:
def test_method_ret_BOOL_with_dsym(self):
"""Test that objective-c method returning BOOL works correctly."""
d = {'EXE': self.exe_name}
self.buildDsym(dictionary=d)
self.setTearDownCleanup(dictionary=d)
self.objc_method_ret_BOOL(self.exe_name)
def test_method_ret_BOOL_with_dwarf(self):
"""Test that objective-c method returning BOOL works correctly."""
d = {'EXE': self.exe_name}
self.buildDwarf(dictionary=d)
self.setTearDownCleanup(dictionary=d)
self.objc_method_ret_BOOL(self.exe_name)
def setUp(self):
# Call super's setUp().
TestBase.setUp(self)
# We'll use the test method name as the exe_name.
self.exe_name = self.testMethodName
# Find the line number to break inside main().
self.main_source = "main.m"
self.line = line_number(self.main_source, '// Set breakpoint here.')
The exe names for the two test methods are equal to the test method names and
are therefore guaranteed different.
o plugins directory
Contains platform specific plugin to build binaries with dsym/dwarf debugging
info. Other platform specific functionalities may be added in the future.
o unittest2 directory
Many new features were added to unittest in Python 2.7, including test
discovery. unittest2 allows you to use these features with earlier versions of
Python.
It currently has unittest2 0.5.1 from http://pypi.python.org/pypi/unittest2.
Version 0.5.1 of unittest2 has feature parity with unittest in Python 2.7
final. If you want to ensure that your tests run identically under unittest2
and unittest in Python 2.7 you should use unittest2 0.5.1.
Later versions of unittest2 include changes in unittest made in Python 3.2 and
onwards after the release of Python 2.7.
o dotest.pl
In case you wonder, there is also a 'dotest.pl' perl script file. It was
created to visit each Python test case under the specified directory and
invoke Python's builtin unittest.main() on each test case.
It does not take advantage of the test runner and test suite functionality
provided by Python's unittest framework. It exists because we wanted a
different way of running the whole test suite. As lldb and the Python test
suite become more reliable, we don't expect to be using 'dotest.pl' anymore.
Note: dotest.pl has been moved to the attic directory.
o Profiling dotest.py runs
I used the following command line to do the profiling on a SnowLeopard
machine:
$ DOTEST_PROFILE=YES DOTEST_SCRIPT_DIR=/Volumes/data/lldb/svn/trunk/test /System/Library/Frameworks/Python.framework/Versions/Current/lib/python2.6/cProfile.py -o my.profile ./dotest.py -v -w 2> ~/Developer/Log/lldbtest.log
After that, I used the pstats.py module to browse the statistics:
$ python /System/Library/Frameworks/Python.framework/Versions/Current/lib/python2.6/pstats.py my.profile
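The same flow can be driven in-process with the cProfile and pstats modules; the output file name mirrors the example above, and the workload function is just a stand-in for a dotest.py run:

```python
import cProfile
import os
import pstats


def workload():
    # Stand-in for the work a dotest.py run would do.
    return sum(i * i for i in range(100000))


# Profile the call and dump the statistics to a file, as cProfile.py does.
cProfile.runctx("workload()", {"workload": workload}, None, "my.profile")

# Browse the statistics, as with the pstats.py command above.
stats = pstats.Stats("my.profile")
stats.sort_stats("cumulative")
# stats.print_stats(5)  # uncomment to print the top five entries
```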
o Writing test cases:
We strongly prefer writing test cases using the SB API's rather than runCmd & expect.
Unless you are actually testing some feature of the command line, please don't write
command-based tests. For historical reasons there are plenty of examples of tests in the
test suite that use runCmd where they shouldn't. Don't copy them; copy the many tests
that do use the SB API's instead.
The reason for this is that our policy is that we will maintain compatibility with the
SB API's. But we don't make any similar guarantee about the details of command result format.
If your test uses the command line, it has to check against the command result
text, and you face a trade-off: either you write your check pattern to match as
little as possible, so that you aren't exposed to incidental changes in the text
but may miss some failures, or you check too much, and irrelevant changes break
your tests.
However, if you use the Python API's it is possible to check all the results you want
to check in a very explicit way, which makes the tests much more robust.
Even if you are testing that a command-line command does some specific thing, it is still
better in general to use the SB API's to drive to the point where you want to run the test,
then use SBInterpreter::HandleCommand to run the command. You get the full result text
from the command in the command return object, and all the part where you are driving the
debugger to the point you want to test will be more robust.
The sample_test directory contains a standard and an "inline" test that are good starting
points for writing a new test.
o Attaching in test cases:
If you need to attach to inferiors in your tests, you must make sure the inferior calls
lldb_enable_attach(), before the debugger attempts to attach. This function performs any
platform-specific processing needed to enable attaching to this process (e.g., on Linux, we
execute the prctl(PR_SET_TRACER) syscall to disable protections present in some Linux systems).

Some files were not shown because too many files have changed in this diff.