Game Engine part 2

Welcome back! The very first thing a project needs, in my view, is proper debugging facilities, so we'll start by developing a logging system. I like the std::ostream message formatting style, so I'll make something that resembles an ostream object. Let's make a list of the things we'd like to support:

1. User friendly (writing to a log should be painless, short, and thread-safe, and reading the output should be informative)
2. Automatic addition of contextual information (timestamps, originating file/line)
3. (Bonus) Tweakable performance (log levels, debug/release presets, background I/O)

I’d like the logging system to be able to handle the following code:

int i = 5;
float f = 3.0f;

logger << "value 1: " << i << "\nvalue 2: " << f << std::endl;

The first thing you'd notice about code like that is that the << operator has to be provided for multiple types. In the example above we encounter three value types (const char*, int, float) and an iostream manipulator function (std::endl). Let's start by creating a class that just contains a buffer to accumulate the entire message (the streaming operators themselves are sketched right after this block):

// ---- Header ----
class LogMessage; // forward declaration, full definition below

class Logger {
public:
	LogMessage operator()();

	void flush(const LogMessage& message);
};

class LogMessage {
...
private:
	friend class Logger; // only allow the Logger class to instantiate a LogMessage

	LogMessage(Logger* owner);
public:
	~LogMessage();

private:
	std::ostringstream mBuffer; // accumulates the streamed values into a single message
	Logger* mOwner; // the Logger object is guaranteed to outlive the message, so a raw pointer is fine
};

// ---- Implementation ----
LogMessage Logger::operator()() {
	return LogMessage(this);
}

void Logger::flush(const LogMessage& message) {
	std::string asString = message.mBuffer.str();
	std::cout << asString << "\n"; // auto-append newlines
}

LogMessage::LogMessage(Logger* owner):
	mOwner(owner)
{
}

LogMessage::~LogMessage() {
	mOwner->flush(*this);
}
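
The streaming operators themselves are what make the ostream-like syntax work; they're not shown above, but a minimal sketch (assuming the message text lives in the std::ostringstream member mBuffer that flush reads from) could look like this:

// ---- Header ----
class LogMessage {
...
	// accumulate anything the underlying ostringstream knows how to format
	template <typename T>
	LogMessage& operator << (const T& value) {
		mBuffer << value;
		return *this;
	}

	// accept iostream manipulator functions such as std::endl
	LogMessage& operator << (std::ostream& (*manipulator)(std::ostream&)) {
		mBuffer << manipulator;
		return *this;
	}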

Using a LogMessage via the Logger front end now creates a temporary object that accumulates values and then flushes itself upon destruction. There are a couple of things that can go wrong here, especially when the LogMessage is moved around -- a moved-from message should not flush itself. This is just a small fix:

// ---- Header ----
class LogMessage {
...
	LogMessage(LogMessage&& msg);
	~LogMessage();

// ---- Implementation ----
LogMessage::LogMessage(LogMessage&& msg):
	mBuffer(std::move(msg.mBuffer)),
	mOwner(std::move(msg.mOwner))
{
	msg.mOwner = nullptr;
}

LogMessage::~LogMessage() {
	if (mOwner)
		mOwner->flush(*this);
}
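
For the same reason it may be worth explicitly disabling copies, so a single message can never be flushed twice; a small sketch:

// ---- Header ----
class LogMessage {
...
	LogMessage(const LogMessage&) = delete;
	LogMessage& operator = (const LogMessage&) = delete;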

Now we're ready to start adding more conveniences. There are a couple of standard preprocessor definitions that would be nice to include - __FILE__ and __LINE__ come to mind. However, for a preprocessor macro to yield some kind of logging object, it needs a static/global object to refer to. I don't take introducing globals lightly, but it *really* helps with making the logger as user-friendly as possible, so let's just get it over with. We'll also add some facilities to the LogMessage to hold the __FILE__ and __LINE__ content.

// ---- Header ----
namespace overdrive {
	extern Logger gLogger; // global storage with external linkage
}

#define gLog ::overdrive::gLogger(__FILE__, __LINE__)

class Logger {
...
	LogMessage operator()(const std::string& file, int line);
	void flush(const LogMessage& message);

class LogMessage {
...
	LogMessage(const std::string& file, int line, Logger* owner);

	struct Meta {
		std::string mFile;
		int mLine;
	};

	Meta mMeta;

Let's address the logger output next -- so far we've been writing to std::cout, but a *real* logger would log to a file. In fact, I'd like to set up the Logger class so that, in debug mode at least, it writes both to a file and to std::cout. The problem here is that we'd like to use multiple implementations of an interface -- polymorphism. There are many ways to do this, but I like the encapsulated polymorphism approach first presented (afaik) by Sean Parent at Going Native. Let's start with a class that has an internal base class for the interface we're trying to support:

class LogSink {
public:
	void operator()(const LogMessage::Meta& meta, const std::string& message);

private:
	struct Concept {
		virtual ~Concept() {}
		virtual void operator()(const LogMessage::Meta& meta, const std::string& message) = 0;
	};

	std::unique_ptr<Concept> mWrapper;
};

By specifying the interface as an operator() we'll be able to support lambdas as well. Now we can add a templated constructor that can be used to implicitly create an implementation of the interface. There are some issues with copying the object though, so we'll also need to add explicit cloning operations. Taken together, this is mostly a matter of properly forwarding things.

...
	struct Concept {
		virtual ~Concept() {}
		virtual Concept* clone() const = 0;
		virtual void operator()(const LogMessage::Meta& meta, const std::string& message) = 0;
	};

	template <typename T>
	struct Model: Concept {
		Model(T impl):
			mImpl(std::move(impl))
		{
		}

		virtual Concept* clone() const override {
			return new Model(mImpl);
		}

		virtual void operator()(const LogMessage::Meta& meta, const std::string& message) override {
			mImpl(meta, message);
		}

		T mImpl;
	};

	template <typename T>
	LogSink(T impl):
		mWrapper(new Model<T>(std::move(impl)))
	{
	}

	LogSink(const LogSink& sink):
		mWrapper(sink.mWrapper->clone())
	{
	}

	LogSink& operator = (LogSink sink) {
		// the function signature already specifies a copy, so we only need to move the result from the temporary
		mWrapper = std::move(sink.mWrapper);
		return *this;
	}

By using this nifty approach, we can wrap any object that conforms to the interface specified in the Concept structure. The wrapper also gains value semantics, which is extremely nice to have. The downside is that each and every copy does a memory allocation. There are ways around that, but they make the implementation a lot more complex. Until it really becomes a problem, I'm going to just roll with it as it is. Let's use the polymorphic wrapper to create a sink that outputs the message to the standard output, just like we've been doing:

LogSink makeConsoleSink() {
	return [](const LogMessage::Meta& meta, const std::string& message) {
		std::cout << message << "\n";
	};
} // really, that's it! (it's implicitly converted to a LogSink)

As the standard output requires no additional state, a single lambda does everything we want. Writing to a file does require some extra state though, so let's write something with internal state for that:

struct FileSink {
	FileSink(const std::string& filename):
		mFile(std::make_shared<std::ofstream>(filename))
	{
		if (!mFile->good()) {
			std::string message = "Failed to open file sink: ";
			message.append(filename);
			throw std::runtime_error(message);
		}
	}

	void operator()(const LogMessage::Meta& meta, const std::string& message) {
		(*mFile)
			<< message
			<< " ("
			<< meta.mFile
			<< ":"
			<< meta.mLine
			<< ")\n";
	}

	std::shared_ptr<std::ofstream> mFile; // by using shared_ptr this can be copied around
};

LogSink makeFileSink(const std::string& filename) {
	return FileSink(filename); // implicitly converted to LogSink
}
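
How the sinks get attached to the Logger isn't shown in the snippets above; one straightforward option (the add() method, the mSinks member and the log filename here are just illustrative) is to keep a vector of LogSinks and have flush hand the finished message to each of them:

// ---- Header ----
class Logger {
...
	void add(const LogSink& sink);

	std::vector<LogSink> mSinks;

// ---- Implementation ----
void Logger::add(const LogSink& sink) {
	mSinks.push_back(sink);
}

void Logger::flush(const LogMessage& message) {
	std::string asString = message.mBuffer.str();

	for (auto&& sink : mSinks)
		sink(message.mMeta, asString);
}

// ---- Usage ----
gLogger.add(makeConsoleSink());
gLogger.add(makeFileSink("overdrive.log"));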

This way, the message written to the file contains some extra information, while leaving the standard output relatively bare. When working with logs I like to have minimal reporting up front, with additional detail available whenever I want to see it, and this is one way to do just that.

The final upgrade worth discussing is offloading the I/O to a separate thread. Writing messages should ideally not interfere with the rest of the program, and I/O operations are vastly slower than in-memory work. An approach I like is to use an Active Object to handle the writing -- an object that executes the operations sent to it on its own dedicated thread. A couple of years ago, Herb Sutter wrote about how to implement one, and we'll use a small variation on that.

class Active {
private:
	// only allow creation via factory method
	Active():
		mRunning(true)
	{ 
	}

public:
	typedef std::function<void()> Callback;

	// don't copy this around
	Active(const Active&) = delete;
	Active& operator = (const Active&) = delete;

	~Active() {
		send([this] { mRunning = false; });
		mThread.join();
	}

	static std::unique_ptr create() {
		std::unique_ptr result(new Active);

		result->mThread = boost::thread(&Active::run, result.get());

		return result;
	}

	void send(Callback message) {
		// this should almost always be called with [=] {...} style lambdas
		// (in order to ensure that the data being operated on in the other
		// thread is still present)
		mMessageQueue.push_back(std::move(message));
	}

private:
	void run() {
		while (mRunning) {
			Callback fn;
			
			mMessageQueue.wait_pull_front(fn);

			fn();
		}
	}

	bool mRunning;
	boost::sync_queue<Callback> mMessageQueue;
	boost::thread mThread;
};
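
To tie the Active object into the logger, flush can package the write into a callback and hand it off, so the calling thread never waits on I/O. Again just a sketch, assuming the Logger owns the sinks and an Active instance as members; the loop over the sinks from the earlier sketch now runs on the background thread:

// ---- Header ----
class Logger {
...
	std::vector<LogSink> mSinks;
	std::unique_ptr<Active> mActive = Active::create();

// ---- Implementation ----
void Logger::flush(const LogMessage& message) {
	// copy everything the background thread needs; the LogMessage itself
	// is destroyed as soon as the logging statement ends
	std::string asString = message.mBuffer.str();
	auto meta = message.mMeta;

	mActive->send([=] {
		for (auto&& sink : mSinks)
			sink(meta, asString);
	});
}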

There we have it -- a user-friendly, thread-safe logging facility with multiple backends. The complete source can be downloaded here.

[edit] Gah... there's an issue in Visual Studio with joining threads in the destructor of global objects, sometimes causing unexpected deadlocks. Effectively this means you shouldn't use std::thread in any sort of global object (static, extern or plain global). It seems that boost::thread doesn't have this issue, though, so use that instead.

[edit] I received some feedback and made a small change to the code -- replacing std::forward instances with std::move for clarity. Thanks kiokko89 🙂

6 thoughts on “Game Engine part 2”

1. I have considered and rejected boost::log – I found it to be too difficult to use for what it does.

   The logging system described in this tutorial is far simpler (and has far fewer features), which I prefer for a project of this type. Were I to develop a webservice or something to that effect, the boost::log library might provide non-trivial functionality that is hard to get right, but I don't expect to need it.

2. I can't compile your example on Arch Linux; I have tried with g++ and clang++.

   The only solution I found was to change “mMessageQueue.push_back” into “mMessageQueue.push” and “mMessageQueue.wait_pull_front” into “mMessageQueue.wait_pull”.

   1. The type of the message queue is a boost::sync_queue, which doesn't have an emplace_back. You can find a list of boost::sync_queue functions here or at the regular boost documentation. Note that boost::sync_queue is still considered somewhat experimental; when I get the chance I'll update the tutorial and probably roll my own.
