Hi, I'm currently working on a project that supports multiple language settings. To handle this, a table stores every text used in the program, in every language. Whenever a text is about to be written on the screen, this table is consulted and, depending on the current language setting, the matching string is returned. I recently joined this project and noticed that the way this is stored isn't very efficient: for every new language added, the time it takes to look up the correct string increases. I therefore came up with a (in my mind) better solution. However, when I tried to implement it, I ran into an error saying that too much memory is used, and I don't understand why. I am using IAR Embedded Workbench.
The original solution in pseudo/C++ code:
typedef struct
{
    TextId::e   textId;     // which message this entry is for
    Language::e language;   // which language this entry is in
    const char* textString; // the translated string
} Text;

static const Text s_TextMap[] =
{
    { TextId::RESTORE_DATA_Q,  Language::ENGLISH, "Restore Data?" },
    { TextId::RESTORE_DATA_Q,  Language::SWEDISH, "Återställa data?" },
    { TextId::RESTORE_DATA_Q,  Language::GERMAN,  "Wiederherstellen von Daten?" },
    { TextId::CHANGE_LANGUAGE, Language::ENGLISH, "Change Language" },
    { TextId::CHANGE_LANGUAGE, Language::SWEDISH, "Välj språk" },
    { TextId::CHANGE_LANGUAGE, Language::GERMAN,  "Sprache wählen" },
};
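The lookup function itself isn't part of the code above, but presumably it is a linear scan over every (text id, language) entry, something like the following sketch (getText and the empty-string fallback are my assumptions, not the project's actual code):

static const char* getText(TextId::e id, Language::e lang)
{
    // Linear scan: every (text id, language) pair is its own entry,
    // so each added language makes the table, and with it the
    // average search length, grow.
    for (unsigned i = 0; i < sizeof(s_TextMap) / sizeof(s_TextMap[0]); ++i)
    {
        if (s_TextMap[i].textId == id && s_TextMap[i].language == lang)
        {
            return s_TextMap[i].textString;
        }
    }
    return ""; // assumed fallback when no entry matches
}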
My solution in pseudo/C++ code:
typedef struct
{
    const char* pEngText;
    const char* pSweText;
    const char* pGerText;
} Texts;

static Texts addTexts(const char* pEngText, const char* pSweText, const char* pGerText)
{
    Texts t;
    t.pEngText = pEngText;
    t.pSweText = pSweText;
    t.pGerText = pGerText;
    return t;
}

typedef struct
{
    TextId::e textId; // one entry per text id, all languages bundled
    Texts     texts;
} TextTest;

static const TextTest s_TextMapTest[] =
{
    { TextId::RESTORE_DATA_Q,  addTexts("Restore Data?", "Återställa data?", "Wiederherstellen von Daten?") },
    { TextId::CHANGE_LANGUAGE, addTexts("Change Language", "Välj språk", "Sprache wählen") },
};
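With this layout the scan only has to compare text ids (one entry per id instead of one per id/language pair), and the language is resolved with a plain member select. Again a sketch with assumed names, reusing the enums from above:

static const char* getTextTest(TextId::e id, Language::e lang)
{
    for (unsigned i = 0; i < sizeof(s_TextMapTest) / sizeof(s_TextMapTest[0]); ++i)
    {
        if (s_TextMapTest[i].textId == id)
        {
            // One comparison per text id; the language just picks a member.
            switch (lang)
            {
                case Language::ENGLISH: return s_TextMapTest[i].texts.pEngText;
                case Language::SWEDISH: return s_TextMapTest[i].texts.pSweText;
                case Language::GERMAN:  return s_TextMapTest[i].texts.pGerText;
                default:                break;
            }
        }
    }
    return ""; // assumed fallback when no entry matches
}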
My solution is obviously faster to look up in the average case, and based on my calculations it should also use less memory: with the full tables, the original solution requires 7656 bytes while mine requires 4224 bytes.
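The rough arithmetic behind that claim, assuming 4-byte enums and 4-byte pointers on a 32-bit target (my assumption; the exact totals depend on the compiler's enum size and struct padding):

// Original layout: one entry per (text id, language) pair
//   { TextId::e, Language::e, const char* } = 4 + 4 + 4 = 12 bytes
//   => with 3 languages: 3 * 12 = 36 bytes of table per text id
//
// New layout: one entry per text id
//   { TextId::e, Texts (3 pointers) } = 4 + 3 * 4 = 16 bytes per text id
//
// The string data itself is identical in both layouts, so the new
// table should be strictly smaller.

However, when I try to compile the code I get linking errors saying: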
Error[Lp011]: section placement failed
unable to allocate space for sections/blocks with a total estimated minimum size of 0x130301 bytes (max align 0x1000) in <[0x0000a000-0x0007ffff]> (total uncommitted space 0x757eb).
Error[Lp011]: section placement failed
unable to allocate space for sections/blocks with a total estimated minimum size of 0x47de4 bytes (max align 0x20) in <[0x1fff0000-0x2000fff0]> (total uncommitted space 0x1fff1).
Error[Lp021]: the destination for compressed initializer batch "USER_DEFAULT_MEMORY-1" is placed at an address that is dependent on the size of the batch, which is not allowed when using lz77 compression. Consider using "initialize by copy with packing = zeros" (or none) instead.