It looks like you have a design issue here: tables should not extend the Database Abstraction Layer. Instead, the DAL should be injected into the table as a dependency.
abstract class Table {
    protected $dal;

    public function __construct(DAL $dal) {
        $this->dal = $dal;
    }

    // whatever else all tables have in common
}
class Table_User extends Table {
    public function someMethod() {
        $this->dal->someOtherMethod();
    }
}
$table = new Table_User($dal);
$table->someMethod();
That way you create the DAL at some outer scope and pass it down. This also allows you to use several different database engines at the same time.
Additionally, you should not create your tables directly in your controller; let a specialized class do that instead. For example, you could use a factory:
class TableFactory {
    protected $dal;

    public function __construct(DAL $dal) {
        $this->dal = $dal;
    }

    public function createTable($name) {
        $className = 'Table_' . $name;
        return new $className($this->dal);
    }
}
That way you can create a table factory at some point with an injected DAL and pass that table factory around.
$table = $factory->createTable('User');
These are phantom type parameters, that is, parameters of a parameterised type that are used not for their representation, but to separate different “spaces” of types with the same representation.
And speaking of spaces, that’s a useful application of phantom types:
template<typename Space>
struct Point { double x, y; };
struct WorldSpace;
struct ScreenSpace;
// Conversions between coordinate spaces are explicit.
Point<ScreenSpace> project(Point<WorldSpace> p, const Camera& c) { … }
As you’ve seen, though, there are some difficulties with unit types. One thing you can do is decompose units into a vector of integer exponents on the fundamental components:
template<typename T, int Meters, int Seconds>
struct Unit {
    Unit(const T& value) : value(value) {}
    T value;
};
template<typename T, int MA, int MB, int SA, int SB>
Unit<T, MA - MB, SA - SB>
operator/(const Unit<T, MA, SA>& a, const Unit<T, MB, SB>& b) {
    // The raw quotient converts back to a Unit through the implicit constructor.
    return a.value / b.value;
}
Unit<double, 0, 0> one(1);
Unit<double, 1, 0> one_meter(1);
Unit<double, 0, 1> one_second(1);
// Unit<double, 1, -1>
auto one_meter_per_second = one_meter / one_second;
Here we’re using phantom values to tag runtime values with compile-time information about the exponents on the units involved. This scales better than making separate structures for velocities, distances, and so on, and might be enough to cover your use case.
It depends on how the compiler is designed.
One option is to flatten the whole name structure: store every symbol under its fully-qualified name in a binary search tree, together with a separate table of visibility scopes recording what is visible from where. Alternatively, the compiler can mirror your name structures directly, keeping a symbol table per code block and scope.
You can easily see that compile-time performance will differ a lot when the compiler developer's design choice is orthogonal to the way the compiler's users organise their code.
However, the purpose of code (apart from the obvious one of doing what it is required to do) is to be readable and maintainable, not to be easily compilable. If you optimise for compilation speed, you are focusing on the wrong side of software development, and you risk finding yourself in five years with code that compiles fast but that you can no longer touch, because you are no longer able to understand it.