The issue with global singletons comes down to code maintenance, architecture, and thread safety.
In Python, unless you're using Stackless Python, the GIL (Global Interpreter Lock: https://wiki.python.org/moin/GlobalInterpreterLock) does help guarantee thread consistency (as opposed to safety), so this is less of an issue in Python than in other languages.
However, what you need to ask when you're looking at singletons is: why?
What can a global singleton give you that a static method, a class, or a cache can't? What about using a Factory design pattern to properly handle creating DB resources?
The problem with a singleton in use is that an object being a singleton is non-obvious, while static classes, methods, or caches are much more obvious in how they can be used and interacted with.
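As a sketch of the factory alternative mentioned above (the names `ConnectionFactory` and `DBConnection` are illustrative, not from any real library): with a factory, creation is explicit, and the caller - not the class - decides how many instances exist.

```python
# A minimal sketch of the Factory pattern for DB resources.
# ConnectionFactory and DBConnection are hypothetical names for illustration.
class DBConnection:
    def __init__(self, dsn):
        self.dsn = dsn

class ConnectionFactory:
    def __init__(self, dsn):
        self._dsn = dsn

    def create(self):
        # Each call returns a fresh, independent connection object.
        return DBConnection(self._dsn)

factory = ConnectionFactory("postgres://localhost/app")
a = factory.create()
b = factory.create()
assert a is not b  # two callers get two distinct objects
```

If you later do need exactly one shared instance, that policy lives in the caller's code, where it is visible, rather than hidden inside the class.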
Consider the following issue:
You decide to use a singleton in a library. Six months later, you find a user story where you need multiples of that singleton - but you've forgotten it's a singleton. You try to create copies, instantiate new ones, and so on, and keep running into weird defects.
This is where singletons become a code maintenance issue, and until you've suffered the costs and frustrations this brings, it's hard to understand why singletons are such a big code smell.
Imagine even worse, that you don't have the source code for the library, the documentation has been lost, and you didn't write it. How would you know a returned object was a singleton?
(I have experienced the above at a job previously... that was... fun for certain definitions of fun.)
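The surprise described above can be reproduced with a classic `__new__`-based singleton (a generic sketch, not code from any particular library):

```python
# A classic __new__-based singleton. A caller who doesn't know the class
# is a singleton gets surprised: "new" instances are silently the same
# object, so state leaks between supposedly separate copies.
class Config:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Config()
b = Config()    # looks like a second, independent object...
a.debug = True
assert a is b   # ...but it's the same one
assert b.debug  # mutating "a" changed "b" too
```

Nothing at the call site (`Config()`) hints that instantiation doesn't create a new object - which is exactly why the resulting defects are so hard to diagnose without the source.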
I'd not be concerned about performance here, but about usability.
Modules are loaded just once in a Python program's lifetime. Referencing elements within modules and packages, including nested modules, is virtually free. Performance doesn't matter here.
Use modules to group your project into logical units; it is more readable and maintainable if things that are closely related conceptually are clustered within a namespace.
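The "loaded just once" behavior can be seen directly: Python caches modules in `sys.modules`, so repeated imports return the same object and attribute lookups on a module are cheap.

```python
# Modules are imported once and cached in sys.modules; later imports
# just return the cached module object.
import sys
import json
import json as json_again

assert json is json_again           # same cached module object
assert sys.modules["json"] is json  # the cache that makes this work
```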
Having understood why Unicode is necessary (recommended reading: link, link - thanks @DanielB for the excellent links) and that character encodings are what computers use to represent real-world characters, it becomes clear that when Python reads or writes bytes representing text from a stream (which can be a file, a pipe, a socket, ...), it needs to know which character encoding is being used so those bytes are meaningful as human-readable text.
Python 2.x uses `ascii` as the default encoding. The `ascii` encoding is inadequate for almost all languages (including English: there's no `£` in `ascii`!), so you need to specify an alternate encoding when writing to streams if you intend to use any characters not part of `ascii`.

When reading from streams, you need to know which encoding was used to write to the stream and use the same encoding to read from it; otherwise the decoded result will be wrong. Try writing a string with Cyrillic characters with the `ISO-8859-5` codec and reading it back with the `UTF-8` codec: you'll see they don't match, because different byte sequences mean different characters in the two encodings.

So to answer your specific question: your code is actually opening the file for writing using the `ascii` codec implicitly. If you want to specify another codec, you need to use the `encoding` parameter of the `open` function in Python 3.x, or, if you are still using Python 2.x (the latest is Python 2.7.3 at the time of this writing), you need to use the functions from the `codecs` module, since `open` in Python 2.x doesn't allow you to specify an encoding (you can use it to read the byte stream into a byte string in memory, though, and then decode that string).
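Both points can be demonstrated in a few lines of Python 3 (the file path here is a throwaway temp file; in Python 2.x you would use `codecs.open(path, "w", encoding=...)` instead):

```python
# Demonstrates specifying an encoding explicitly with open(), and what
# happens when the encodings used for writing and reading disagree.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Write Cyrillic text using the ISO-8859-5 codec.
with open(path, "w", encoding="iso-8859-5") as f:
    f.write("Привет")

# Reading it back with the same codec round-trips correctly.
with open(path, encoding="iso-8859-5") as f:
    assert f.read() == "Привет"

# Reading the same bytes as UTF-8 fails, because these ISO-8859-5 byte
# sequences are not valid UTF-8.
try:
    with open(path, encoding="utf-8") as f:
        f.read()
except UnicodeDecodeError as e:
    print("decode failed:", e.reason)
```

The `UnicodeDecodeError` is the best case; with other encoding pairs the decode can silently "succeed" and produce mojibake instead, which is harder to spot.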