17 March 2021

Thoughts on Python logging

How to use logging in your Python project without being annoyed by libraries that do logging

During development, or when dealing with complex software, you often put logger.debug() statements in your code.  (This assumes you've run logger = logging.getLogger("blah") or equivalent.)  These let you see what's going on deep inside your program and discover what isn't working as expected.  Unlike print statements, logging lets you change the log level on the fly, change the formatting and even write to log files easily; the Logging HOWTO covers this.
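For example, a minimal sketch (the logger name, function and messages are purely illustrative):

import logging

logger = logging.getLogger("myapp.fetcher")   # hypothetical module name

def fetch(url):
    logger.debug("about to fetch %s", url)
    # ... the real work would go here ...
    logger.debug("finished fetching %s", url)

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)   # show debug messages while developing
    fetch("https://example.com/")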

However, this approach has a problem, and it has to do with libraries.  One of the great strengths of Python is that it's very easy to install external libraries (either with pip or your OS's package manager) and add them to your program.  But library developers need to debug their code too, and that's where you can run into conflicts with logging.

For example, when developing a program that uses the boto3 library to call the AWS API, this is one of the many messages you'll see if you run logging.basicConfig(level=logging.DEBUG) so you can see your code's own debug messages:

DEBUG:botocore.loaders:Loading JSON file: /usr/lib/python3/dist-packages/boto3/data/ec2/2016-11-15/resources-1.json

There are also stacks of messages from modules such as botocore.endpoint and urllib3.connectionpool, including all HTTP requests and responses.  This is a potential security risk as well as being annoying.
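For reference, a minimal sketch of the kind of script that produces this flood (the EC2 calls are just an example):

import logging

import boto3

logging.basicConfig(level=logging.DEBUG)   # needed to see my own debug messages...
logger = logging.getLogger("myapp")        # hypothetical logger name

logger.debug("listing instances")          # my message
ec2 = boto3.resource("ec2")                # ...but this also unleashes botocore/urllib3 debug output
for instance in ec2.instances.all():
    logger.debug("found %s", instance.id)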

Explainer -- numeric log levels

Each of the level constants and the corresponding convenience functions from the logging module represents a numeric value.  This is similar in concept to syslog priorities from Unix.


Level/Function   Value
DEBUG            10
INFO             20
WARNING          30
ERROR            40
CRITICAL         50

If the current logging threshold (default WARNING) is numerically greater than the value of the function you called, the message will be ignored; if it's less than or equal, the message will be processed.  Note that the functions are available both in the logging module and on any logger object you create.
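A quick illustration of the comparison (a sketch; the logger name is arbitrary):

import logging

logging.basicConfig(level=logging.WARNING)   # threshold value is 30
logger = logging.getLogger("demo")

logger.debug("ignored: 10 is less than 30")
logger.info("ignored: 20 is less than 30")
logger.warning("processed: 30 is equal to 30")
logger.error("processed: 40 is greater than 30")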

Alternative approach -- module-based threshold tweaking

This approach is a bit finicky; it involves bumping up the logging threshold of various libraries so that their debug messages get filtered out.  It takes advantage of the Python logging module's hierarchical nature: loggers for submodules inherit their parent module's logging threshold.  (It actually works via the name of the logger, so technically it's a separate string-based hierarchy with levels separated by dots.)

For example:

_bl = logging.getLogger("botocore")
_bl.setLevel(logging.DEBUG + 1)

You can think of this as raising a wall around this module (and its submodules) slightly higher than the wall that represents the global logging threshold, so debug messages can't get out.  Other modules can still emit debug messages if the global logging threshold is set to logging.DEBUG .  All info messages will get out regardless, because their numeric value is higher than logging.DEBUG + 1 .

However, this approach falls down when you find yourself playing whack-a-mole with all the different module hierarchies of all the libraries your project uses.
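If you do persist with it, one way to keep the whack-a-mole manageable is to collect the noisy top-level logger names in one place.  A minimal sketch (the list of names is just an example based on the libraries mentioned above):

import logging

# Hypothetical list -- extend it as new offenders appear in your output
NOISY_LIBRARIES = ["botocore", "boto3", "urllib3"]

for name in NOISY_LIBRARIES:
    logging.getLogger(name).setLevel(logging.DEBUG + 1)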

Smart logging

Instead of using logger.debug() you can use logger.log(level, msg) .  You would normally pass a constant like logging.DEBUG as the level, because a given log message's level shouldn't vary.  (Which is why you'd normally just use the convenience functions.)  In the event of an error condition, for example, you'd use conditional logic in your code to emit a different message with a different level.

The essence of this approach is that once you start using logger.log() you can define your own log level constants and use these instead of the ones in the logging module.  For example, in the root of your package, you could create a file called logging.py as follows:

import logging

DEBUG = logging.DEBUG + 1
INFO = logging.INFO + 1
NOTICE = logging.INFO + 5
# etc.

Then your code can first use import xyz.logging (assuming your package is called xyz) and then use statements like logger.log(xyz.logging.DEBUG, msg) .  This way your messages' level will be at or above the global logging threshold and so will be emitted if you have run logging.basicConfig(level=xyz.logging.DEBUG), unlike debug level messages from libraries.  Personally, I think replacing all the logger.debug() statements in your code is less of a pain than playing setLevel whack-a-mole with libraries.

This also works at different metaphorical levels (regarding intent rather than numerical value).  For example, boto3.resources emits INFO level messages describing each API request.  This is a bit of a pain because it essentially includes the JSON of the request body.  But if you use xyz.logging.INFO for both your info messages and the global logging threshold, these somewhat annoying Boto messages get filtered out.  If you set the global logging threshold to xyz.logging.DEBUG instead, Boto's INFO level messages implicitly get treated more like debug messages and its annoying debug messages get filtered out.
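Putting it all together, a minimal sketch (assuming the xyz.logging module shown above; the logger name is illustrative):

import logging

import xyz.logging   # the module defined above; "xyz" is your package's name

logging.basicConfig(level=xyz.logging.DEBUG)   # i.e. logging.DEBUG + 1
logger = logging.getLogger("xyz.fetcher")      # hypothetical module within your package

logger.log(xyz.logging.DEBUG, "emitted: 11 is not below the threshold of 11")
logging.getLogger("botocore").debug("filtered: 10 is below the threshold of 11")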

11 March 2018

Warren Ellis swearing at the Internet Of Things

Warren Ellis is famous for many things, including Transmetropolitan; two novels; the scripts for various video games, a movie starring Helen Mirren and a run of James Bond comics; and much else besides.  He is very smart and, when the mood takes him, extremely sweary.  Both of these things make for humorous and insightful writing.

His talk at ThingsCon 2015 is a prime example of him being both smart and sweary.  It is reproduced neatly here, and the original source is here.  (For more of this kind of writing -- plus everything from recipes, travelogues and technology reviews to musings on the history of English folk magic, as well as copious detail on his work, largely in the field of comics -- subscribe to his newsletter: Orbital Operations.)

In summary: electronic device manufacturers, and digital disruptors of any sort, need to start caring more about the effects of the technology they are unleashing into the world.


28 September 2017

Installing Webmin and Virtualmin from packages

Please note: this guide has not yet had a complete run-through from scratch.  If you hit problems, please e-mail alastair@plug.org.au

Do you want to install Webmin and Virtualmin from packages, i.e. not using the official install script?  The process is finicky, but can be done, as shown here.  One reason you might wish to do this is that you might not want the default "kitchen sink" installation and all the dependencies it brings in.  (Or you might not like how the official install script modifies /etc/apt/sources.list rather than adding a file called /etc/apt/sources.list.d/webmin.list .  Or you might be concerned that the install script is served from a non-HTTPS URL and is therefore open to modification in transit or other security breaches.  Et cetera.)

The end result of this process is a server that uses PHP, fcgid and Apache.  Unlike a standard Virtualmin installation, it is a "bare bones" server that can be used for web hosting only.  It assumes that other features like e-mail hosting and DNS are provided by the cloud or separate servers.

This guide assumes that you are using Debian or Ubuntu.  It might be possible to apply some of the instructions below on other systems, but many won't work unless substitute commands are used.  Note: older versions of Debian (wheezy and earlier) and Ubuntu (precise and earlier) don't have the apt command; you can use aptitude instead.  (FYI, aptitude is installable in recent OS releases if you prefer it.)

Install Linux

This is out of scope for this guide.

Linux Preparation

As an alternative to the steps in the sub-sections below, you can install the required packages manually, but there are a lot of them to remember.  There is also a script called setup-fcgid at http://al.id.au/svn/tools/debian/apache-fcgid/ , but note that it currently works only on older releases of Ubuntu (pre-Xenial) and Debian (pre-Stretch).

Setup LAMP and Postfix

  1. Run sudo tasksel
  2. Move text cursor to "LAMP server" and press Enter
  3. Move text cursor to "Mail server" and press Enter
    • Virtualmin won't work without Postfix etc.
    • Note that Webmin's Exim support is listed as experimental, so on Debian (where the default MTA is Exim) you might want to install Postfix instead if you have no preference
  4. Press Tab to go to the <Ok> button and press Enter
  5. When prompted to set a MySQL server password, do so -- unless you will be uninstalling MySQL as per the "Remove MySQL server" sub-section below, in which case you will be prompted multiple times, so just press Enter each time
  6. When the Postfix configuration dialog pops up, read the intro carefully and then choose the option that applies to you

Install extra packages

If you are running Ubuntu Xenial (or newer) or Debian Stretch (or newer), run this command:
sudo apt install libapache2-mod-fcgid apache2-suexec-custom php-cgi
Otherwise, run this command:
sudo apt install libapache2-mod-fcgid apache2-suexec-custom php5-cgi

Alter the PHP configuration

This is needed to prevent Apache's built-in PHP module from trying to run PHP files.

If you are running Ubuntu Xenial (or newer) or Debian Stretch (or newer), run this command:
sudo rm /etc/apache2/mods-enabled/php7.0.conf
Otherwise, run this command:
sudo rm /etc/apache2/mods-enabled/php5.conf

Remove MySQL server if necessary

Note: Only do this if you'll be using a separate database server.
  1. Run sudo apt purge mysql-server mysql-server-5.7 mysql-server-core-5.7

Webmin and Virtualmin Packages

Keys

  1. Run mkdir /tmp/virtualmin-keys
  2. Run wget -N -P /tmp/virtualmin-keys http://software.virtualmin.com/lib/{RPM-GPG-KEY-webmin,RPM-GPG-KEY-virtualmin,RPM-GPG-KEY-virtualmin-6}
  3. Run shasum -a 256 /tmp/virtualmin-keys/{RPM-GPG-KEY-webmin,RPM-GPG-KEY-virtualmin,RPM-GPG-KEY-virtualmin-6}
  4. Confirm that you get the output shown below.  (This is necessary because software.virtualmin.com doesn't have a valid HTTPS certificate and thus only works via the non-HTTPS URL.  You therefore need to do this extra check to make sure you've got the real keys.)
Required output:
36a563bec98a9894065d5f45fbfe58ef51985aecc561569e6288b009ef28f251  /tmp/virtualmin-keys/RPM-GPG-KEY-webmin
08294ec28b36249adf8d51352153235d2467b081d2f3e133771da9100b6fbc81  /tmp/virtualmin-keys/RPM-GPG-KEY-virtualmin
d8bd1baa45a96a837efe1cd535f8a9325aff18751e8571cf3e792c5ea3ffb039  /tmp/virtualmin-keys/RPM-GPG-KEY-virtualmin-6

APT config

  1. Run sudoedit /etc/apt/sources.list.d/webmin.list
  2. Paste in the contents shown below
    1. Note: the lines may appear to contain a single URL, but they don't, so don't remove any spaces
  3. Modify the two placeholders as follows:
    1. {{OS}} -- debian or ubuntu
    2. {{RELEASE}} -- the codename of your Distro release
      • You can find this by running lsb_release --short --codename
      • For stretch, use jessie instead (the directory for stretch is missing from the server) 
This is the contents of the file mentioned above:
deb http://software.virtualmin.com/gpl/{{OS}}/ virtualmin-{{RELEASE}} main
deb http://software.virtualmin.com/gpl/{{OS}}/ virtualmin-universal main
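For example, on Debian stretch (remembering the jessie substitution noted above), the finished file would be:
deb http://software.virtualmin.com/gpl/debian/ virtualmin-jessie main
deb http://software.virtualmin.com/gpl/debian/ virtualmin-universal main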

Installation 

  1. Run sudo apt-get update
  2. Run sudo apt install webmin-virtual-server virtualmin-base

Configuration

First, configure the apache2-suexec-custom package:
  1. Run sudoedit /etc/apache2/suexec/www-data
  2. Change the first line to /home
  3. Save and quit (press Ctrl-X)

Next, log into Webmin and configure it:
  1. Go to https://host.address:10000/
    • Replace host.address with the IP Address or full host name of your server
  2. Log in with your normal Unix account
  3. In the left-hand menu, click on Servers and then Apache Webserver
  4. Click on the Global configuration tab in the right-hand section
  5. Click on Configure Apache Modules
  6. Tick the actions, fcgid and suexec boxes
  7. Click [Enable Selected Modules]
You can now click on Virtualmin at the top of the left-hand menu and create "Virtual Servers", which are simply managed Apache vhosts.

Caveats

If you want to host a site named after the primary address of the server, it might not work, because the Apache default vhost will try to serve those requests instead.  (This only happens if the server's hostname is configured to match the primary address.)

To fix this, do the following:
  1. Run sudoedit /etc/apache2/sites-available/000-default.conf 
  2. Uncomment the ServerName directive
    • The value should be nonsense, as the point is to set it to something that will never match a real vhost
  3. Save and quit (press Ctrl-X)
  4. Run sudo service apache2 reload


01 September 2017

Resizing a disk on a cloud compute instance (AWS or Google)

Diagnosis

If you get an error when installing/upgrading a package that looks like this:

dpkg: error processing archive /var/cache/apt/archives/linux-image-3.13.0-62-generic_3.13.0-62.102_amd64.deb (--unpack):
 cannot copy extracted data for './boot/vmlinuz-3.13.0-62-generic' to '/boot/vmlinuz-3.13.0-62-generic.dpkg-new': failed to write (No space left on device)
...then your root file system is full. This can also be checked like this:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            129G   129G   0M 100% /
If this has happened, one way to proceed is to add a new file system and move data to it (e.g. so that /home is a separate file system that is mounted into the tree but not using any of the allocated space on the root file system any more).  This method is not covered here.

Another way is to expand the size of the existing root file system.  That is what this article covers.  The phases below apply to any file system that has become full.  (No longer do you need to shut down the instance or unmount the file system, snapshot the volume and create a new, larger one from the snapshot.)

How to fix 

Warning: A mistake or misinterpretation when following these steps could destroy your data.  Please use caution, and unless you are at least an intermediate-level Linux user, consider asking a system administrator who is familiar with the tools presented to perform the task.

Phase 1 -- Expand virtual disk

If you are using Amazon Web Services:
  1. Log in to the AWS console
  2. Make sure you are in the EC2 service section
  3. Click on Volumes in the left-hand menu
  4. Find the volume that is attached to your instance (sort by the Attachment column)
    • Ensure you have found the correct one if that instance has multiple volumes
  5. Right-click on the volume and choose Modify
  6. Alter the size
  7. Click [Save]
 If you are using Google Compute Engine:
  1. Log in to the console
  2. In the drop-down menu, choose the project that your compute instance belongs to
  3. In the Resources box, click on Compute Engine
  4. Click on the name of your instance in the list
  5. Scroll down to the "Boot disk and local disks" section
  6. Click on the relevant disk
  7. Click [EDIT] in the toolbar
  8. Change the Size value
  9. Click [Save]
Other virtualisation systems (like VMware) or storage management systems (like LVM) may allow you to expand a disk volume.  If your host uses one of these systems, research and apply whatever method is available to you, then continue with the next phases.

Phase 2a -- Expand partition

In the steps below, replace /dev/sda with the device name of the root file system, minus the partition number.
  1. Log in via SSH 
  2. Run lsblk to determine the device of the root file system
    • See examples below
    • The device name is given in the first column, and in this case is not preceded with "/dev/"
    • The relevant disk or partition is the one with "/" in the MOUNTPOINT column
  3. If the root file system's device is a disk (not a partition), skip to "Phase 3 -- Expand file system"
    • Virtual disks under Linux have device names like "/dev/sda" or "/dev/xvdf"
    • Partitions are a way of subdividing disks; they have device names like "/dev/sda1" or "/dev/xvdf2"
    • A file system can reside on a partition or use the whole disk; in the latter case there is no partition table and so trying to list/manipulate it won't work
  4. Tell Linux to detect the new disk size
    1.  N/A -- this is done automatically by the kernel
  5. Install parted
    1.  Run "sudo apt-get update ; sudo apt-get install parted"
  6. Run parted to expand the partition
    1. Run "sudo parted /dev/sda"
      • Observe the note above about /dev/sda
    2. Check if there is more than one partition on the disk:
      1. At the parted prompt, Run "print list"
        • Take note of the disk size displayed on the second line of output
      2. If the root file system's partition (see step 2) is not the last one on the disk, follow the steps at "Phase 2b -- Moving and expanding partitions" and then return here.
    3.  At the parted prompt, Run "resize 1 start size"
      • If appropriate, replace "1" with the root file system's partition number
      • Replace "start" with the partition's existing Start value from the table show by "print list" above
      • Replace "size" with the disk size value noted above
    4. Follow the instructions provided by parted
    5. At the parted prompt, Run "quit"
  7. Tell Linux to update its view of the partitions
    1.  Run "sudo kpartx -u /dev/sda"
      • Observe the note above about /dev/sda
lsblk examples
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0    0  15G  0 disk /
 or:
$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  10G  0 disk
└─sda1   8:1    0  10G  0 part /


Phase 2b -- Moving and expanding partitions

Warning: Skip to Phase 3 unless Phase 2a directed you to follow these steps.

TBA

Phase 3 -- Expand file system

This is unnecessary if your root file system's device is a partition, as you will have resized both in Phase 2.

Firstly, run "df -hT" and take note of the value for the root file system in the Type column.
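The output will look something like this (device name and sizes are illustrative); the Type column is the second one:
$ df -hT
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sda1      ext4   10G  9.9G     0 100% /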

If the file system type is ext4, follow these steps:
  1. Run "sudo apt-get update ; sudo apt-get install e2fsprogs"
  2. Run "sudo resize2fs /dev/sda"
    • Observe the note in Phase 2a about /dev/sda
  3. Follow the instructions provided
 If the file system type is xfs, follow these steps:
  1. Run "sudo apt-get update ; sudo apt-get install xfsprogs"
  2. Run "sudo xfs_growfs /dev/sda"
    • Observe the note in Phase 2a about /dev/sda
  3. Follow the instructions provided
If the file system type is rootfs, look for the other line in the df output that is also mounted on "/" (don't ask me why there are two) and follow the steps above for that file system type.

If the file system type is none of the above, do a web search for the type and the word "resize".

Phase 4 -- Verification

Run "df -h" and you should see that you now have plenty of free space on your root file system.


06 March 2017

Dynamically generating host entries for SSH client config

I will at some point in the future blog about how cool Ansible inventory files are, and how I re-purposed them for Shepherd.
But for now, given that I have all these Ansible inventory files lying around, why not use them to generate my ssh config file, instead of having to update it by hand every time I add a new host to my collection?

Background: the command-line OpenSSH client reads both /etc/ssh/ssh_config and ~/.ssh/config to get the default values for options as well as any host-specific options.  This means that you can define the hosts that you'll be logging into using friendly names.  You can also set global settings, or just settings for specific groups of hosts using wildcards.
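For example, a couple of hand-written entries might look like this (host names, address and options are purely illustrative):

Host web1 web1.example.net
  HostName 203.0.113.10
  User deploy

Host *.example.net
  ServerAliveInterval 30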

I use a Makefile that reads all the inventory files that I have specified and converts them into host entries.  It combines these with the contents of all the files in ~/.ssh/config.d (this is non-standard) and creates ~/.ssh/config .  Note that you can't edit this file manually any more; if you do, either your changes will be overwritten or it will prevent updates to one of the source files from being used (if the target file is newer than the source, which is how make determines if a dependency has changed and therefore the target has to be rebuilt).

Here is a sanitised version of my Makefile:
# User options
CREATED_FILES=xyz.conf mine.conf
LOGLEVEL_TWEAK_REGEXP=xyz.net

.PHONY: all
all: config

# User dependencies
tmp/xyz.conf: ~/Work/XYZ/Hosts/hosts.txt
tmp/mine.conf: ~/Transfer.unison/Geekery/Hosts/hosts.txt


# Actual targets and variables
CREATED_FILE_PATHS = $(CREATED_FILES:%=tmp/%)
config: tmp/000_header.conf $(CREATED_FILE_PATHS) config.d/*.conf
        if [ ! -f $@.bak ] ; then cp -p $@ $@.bak ; fi
        for file in tmp/000_header.conf $(CREATED_FILE_PATHS) ; do cat $$file ; echo ; done > $@
        for file in config.d/*.conf ; do echo "# $(CURDIR)/$$file" ; cat $$file ; echo ; echo ; done >> $@

tmp:
        mkdir $@

$(CREATED_FILE_PATHS):
    echo "# $^" > '$@'
    sed -e '/^#/d' \
        -e 's/[[:space:]]*#.*//' \
        -e '/ansible_ssh_host=/ !d' \
        -e 's/\(cloud_provider\|cloud_region\|cloud_instance_id\)=\([^[:space:]]*\)[[:space:]]*//g' \
        -e 's/^\([^[:space:]]*\)[[:space:]]*alias=\([^[:space:]]*\)[[:space:]]*/Host \1 \2\n/' \
        -e 's/^\(Host [^\n]*\)[[:space:]]*alias=\([^[:space:]]*\)[[:space:]]*/\1 \2\n/' \
        -e 's/^\([^H][^[:space:]]*\)[[:space:]]*/Host \1\n/' \
        -e 's/ansible_ssh_host=\([^[:space:]]*\)[[:space:]]*/  HostName \1\n/' \
        -e 's/ansible_ssh_user=\([^[:space:]]*\)[[:space:]]*/  User \1\n/' \
        -e 's/ansible_ssh_private_key_file=\("[^"]*"\)/  PubkeyAuthentication yes\n  IdentitiesOnly yes\n  IdentityFile \1/' \
        -e 's/ansible_[^[:space:]]*=[^[:space:]]*[[:space:]]*//g' \
        -e 's/^\(Host [^\n]*\($(LOGLEVEL_TWEAK_REGEXP)\)[^\n]*\)/\1\n  LogLevel Error/' \
        -e 's/\n\([[:space:]]*HostName [^\n]*\($(LOGLEVEL_TWEAK_REGEXP)\)[^\n]*\)/\n  LogLevel Error\n\1/' \
        -e 's/$$/\n/' \
      '$^' >> '$@'

tmp/000_header.conf: | tmp
    echo '# see ssh_config(5)' > $@
    echo '#' >> $@
    echo '# DO NOT EDIT -- ANY CHANGES WILL BE OVERWRITTEN BY Makefile' >> $@

Usage 

To "run" the Makefile, use this command:
make -C .ssh
That's it.  You can run it as often as you like, and if no source files have been modified, the target file won't be re-created.

Options

CREATED_FILES is the list of files that are to be created in ~/.ssh/tmp before being combined (with the other files) to make the target file.  You also have to provide dependency lines (these are cut-down make rules) that specify which inventory file is used to make each temporary file.

LOGLEVEL_TWEAK_REGEXP allows you to specify hosts for which the "LogLevel Error" option will be set, which prevents /etc/issue.net from being blatted onto your terminal every time you run scp or rsync.  The regexp is matched against both the host title (including aliases) and the SSH hostname.

Dependency lines

Note that each file in CREATED_FILES has a matching line (including "tmp/") that specifies the source file.  Think of these as a pair; every time you add or remove a file from CREATED_FILES you have to add or remove the dependency line.

Caveats

At least one file must be in .ssh/config.d -- but it can be empty.  I recommend having a file in this subdirectory called zzz_global.conf so its contents will come after all your host stanzas.  This is the place to put your Host * stanza with its options, e.g. "ServerAliveInterval 30", which is great for preventing your ADSL modem from dropping connections.
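A minimal zzz_global.conf along those lines would be something like:

# Global defaults -- must come last, after all host stanzas
Host *
  ServerAliveInterval 30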

If you don't have anything to put in LOGLEVEL_TWEAK_REGEXP, comment out the two lines that use it by putting "##" after the first single quote of each sed line.  Yes, sed supports comments!


20 December 2016

Quick and dirty PHP autoloading using Composer

Overview

Why write your own PHP class autoloader when Composer provides a perfectly good one?  It supports both PSR-4 and PSR-0 Recommendations (a.k.a. PHP standards for how namespaces should be used with autoloading).  Fun fact: it's based on the Symfony framework's autoloader and has lots of optimisations and features built in.

Composer is normally used to install/update third party components into a PHP project, from Git repos or packagist.org .  But the same functionality that Composer uses to let your project access classes etc. from within those components can also be used to access your project's own classes etc.  In other words, depending on how you configure it, the autoloader that Composer generates for your project will provide access to class files defined in both third party components and your project itself.

I made a repo called template-PHP-project that has a composer.json already configured for both PSR-4 class autoloading and an include path.

Usage

Simply clone template-PHP-project, rename the directory and then delete its .git subdirectory. Then you can add web files, class files etc. and install third party components using Composer. You can then run git init and commit all the files ready to push to a blank repo on your favourite Git hosting site.  Or use Mercurial... I won't judge you.

If you like, instead of removing .git just edit .git/config and change the origin.  Or run git config remote.origin.url your-repo-url .

If you have an existing project, you're better off moving your files around to fit the structure below than trying to squeeze template-PHP-project's functionality into it.

Directory structure

You will get all of these files and directories by default (thanks to .keep files) except vendor/ (which is managed by Composer):
|-- app/
|   |-- classes/
|   `-- include/
|-- bin/
|   `-- demo.php
|-- composer.json
|-- composer.lock
|-- config/
|-- docroot/
|   `-- index.php
`-- vendor/

Composer config

First, edit composer.json and change XYZ to your project's namespace.  You'll have to create a subdirectory in app/classes/ with this name, and that's where you put your class files -- name them according to the standard.  (You can add multiple namespaces if you like.  Each of your project namespaces automatically supports a hierarchy of sub-namespaces, inferred from the subdirectories within that namespace's subdirectory.)  Don't forget to leave the trailing backslashes in the composer.json definition.
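As a rough sketch, the relevant parts of composer.json look something like this (the XYZ namespace and paths follow the directory structure above; the file shipped in the repo is the authoritative version):

{
    "autoload": {
        "psr-4": {
            "XYZ\\": "app/classes/XYZ/"
        }
    },
    "include-path": ["app/include/"]
}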

Before your code will work, you'll have to get Composer to create/update vendor/autoload.php and vendor/composer/ by running this command:
composer update
This has to be done every time you add a new class file, or whenever you modify the require section of composer.json.

Please test all your code after doing this as it will update versions and dependencies of third party components.  Don't forget to commit afterwards; see section below.

Note

The supplied composer.json doesn't use a fallback directory for looking up namespace files.  All namespaces are expected to be explicitly declared.

Reminder regarding what to commit

This also covers what to keep in mind when deploying code.
  1. Always commit composer.lock
    Note: this will be erased by Composer if you don't require any third party components
  2. Never commit  vendor/
  3. On your server, Docker container or whatever, you are supposed to clone the repo (or extract it from a tarball) and then run composer install. If the repo is already there, run git pull and then composer install.
    composer.lock will ensure you get all the correct, working, tested versions of third party components on every deploy.  (Nobody likes surprises in production.)
  4. Never run composer update on production.

Example

See demo.php for an example of a PHP script that uses both an autoloaded class (assumes namespace Demo has been configured in composer.json) and a file in the pre-configured include path.

This script relies on the following files (re-creating them is left as an exercise for the reader):
  • app/include/foo.inc
  • app/classes/Demo/Util.php

Other features

Include path

As mentioned, my composer.json sets up app/include/ as a place from which you can include files without needing to specify the path.  The Composer docs state that this feature is deprecated because, ultimately, with an autoloader you can do everything with classes.  Therefore you shouldn't need to write include files as part of a new project, given that:
  • defines can be replaced by class constants, e.g. Foo::FROB_WEIGHT
  • functions can be moved to public static methods in a class, e.g. Foo::measure()
  • global variables are stupid; seriously, use a static class variable if you need one, but please consider writing some accessor methods
All of the above replacements can take advantage of namespaces and autoloading.
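For instance, the first two replacements might look like this (a sketch; the names echo the examples above):

<?php
namespace XYZ;   // hypothetical project namespace, as per composer.json

class Foo
{
    // instead of define('FROB_WEIGHT', 42) in an include file
    const FROB_WEIGHT = 42;

    // instead of a free function in an include file
    public static function measure()
    {
        return self::FROB_WEIGHT * 2;
    }
}

// Elsewhere, after including vendor/autoload.php:
//   echo \XYZ\Foo::measure();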

Automatic includes

I'm not currently using Composer's files feature, but it's easy enough to add to the autoload block for projects that should include a file into every PHP page that uses the autoloader.


22 November 2016

Why Git?

To give you an example of how helpful Git can be, I've adapted an e-mail I sent to a client.  They had customised a library I'd written for them, and I needed to fix a bug both in my original version and in the modified version of the include file they'd sent me.  I did these steps:
  1. Create a local repo inside my source directory
  2. Commit all the files into the local Git database
  3. Create and switch to a new branch called 'test'
  4. Save their modified version of main.inc
  5. Commit the change in the 'test' branch
  6. Switch back to the 'master' branch
  7. Fix the bug and commit
  8. Switch to the 'test' branch
  9. Merge all changes from the 'master' branch
This automatically applies the change from step 7 and commits it in the current branch.  It might seem like a lot of work to apply one change that could easily be done again to a different version of the file, but consider that it scales: even complex changes can be merged with only cursory review required.  (Unless both branches have changed in contradictory ways -- i.e. the same parts of the files have been changed -- since they diverged.  In that case a merge conflict will happen, which is easy to resolve by "collapsing" each pair of conflicting changes that Git places next to each other in the files that couldn't be merged automatically.  A commit is then necessary, because obviously it couldn't be done when the conflict was detected.)  The corresponding commands are sketched below.
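Roughly, the steps above map onto these commands (a sketch; the path to the client's file and the commit messages are illustrative):

git init                                  # step 1
git add -A
git commit -m "Initial import"            # step 2
git checkout -b test                      # step 3
cp ~/mail-attachments/main.inc main.inc   # step 4 (path is illustrative)
git commit -am "Client's customisations"  # step 5
git checkout master                       # step 6
# ...edit main.inc to fix the bug...
git commit -am "Fix bug"                  # step 7
git checkout test                         # step 8
git merge master                          # step 9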
