Warning

This section contains snippets that were automatically translated from C++ to Python and may contain errors.

Contiguous Cache Example

The Contiguous Cache example shows how to use QContiguousCache to manage memory usage for very large models. In some environments memory is limited and, even when it isn’t, users still dislike an application using excessive memory. Using QContiguousCache to manage a list, rather than loading the entire list into memory, allows the application to limit the amount of memory it uses, regardless of the size of the data set it accesses.
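As a rough sketch of how such a model might be set up, the cache is constructed with a fixed capacity that is far smaller than the number of rows the model reports. The class name, constants and values below follow the C++ example but are illustrative, and the snippet assumes that QContiguousCache (a C++ template class) is exposed, or wrapped with the same interface, by your Python bindings.

from PySide6.QtCore import QAbstractListModel, QModelIndex

bufferSize = 500               # most rows the cache will ever hold at once
lookAhead = 100                # distance that triggers a re-centred cache window
halfLookAhead = lookAhead // 2

class RandomListModel(QAbstractListModel):
    def __init__(self, parent=None):
        super().__init__(parent)
        # Assumed binding of QContiguousCache<QString>: memory use is bounded
        # by bufferSize no matter how many rows the model reports.
        self.m_rows = QContiguousCache(bufferSize)
        self.m_count = 10000

    def rowCount(self, parent=QModelIndex()):
        return self.m_count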

The simplest way to use QContiguousCache is to cache items as they are requested. When a view requests an item at row N, it is also likely to ask for items at rows near N.

def data(self, index, role):
    if role != Qt.DisplayRole:
        return None

    row = index.row()

    # Fill the cache around the requested row. For small movements the rows
    # between the cached range and the requested row are fetched one by one;
    # for large jumps a window of rows centred on the requested row is cached.
    if row > self.m_rows.lastIndex():
        if row - self.m_rows.lastIndex() > lookAhead:
            self.cacheRows(row - halfLookAhead, min(self.m_count, row + halfLookAhead))
        else:
            while row > self.m_rows.lastIndex():
                self.m_rows.append(self.fetchRow(self.m_rows.lastIndex() + 1))
    elif row < self.m_rows.firstIndex():
        if self.m_rows.firstIndex() - row > lookAhead:
            self.cacheRows(max(0, row - halfLookAhead), row + halfLookAhead)
        else:
            while row < self.m_rows.firstIndex():
                self.m_rows.prepend(self.fetchRow(self.m_rows.firstIndex() - 1))

    return self.m_rows.at(row)

def cacheRows(self, start, end):
    # "from"/"to" in the C++ example; renamed here because "from" is a
    # reserved word in Python.
    for i in range(start, end + 1):
        self.m_rows.insert(i, self.fetchRow(i))

After retrieving the row number, the code determines whether it falls within the bounds of the contiguous cache’s current range. It would have been equally valid to simply use the following code instead.

while row > self.m_rows.lastIndex():
    self.m_rows.append(self.fetchRow(self.m_rows.lastIndex() + 1))
while row < self.m_rows.firstIndex():
    self.m_rows.prepend(self.fetchRow(self.m_rows.firstIndex() - 1))

However, a list will often jump to a distant row when the scroll bar is used directly, and the code above would then fetch every row between the old and new positions.

Using lastIndex() and firstIndex() allows the example to determine which part of the list the cache currently holds. These values are not indexes into the cache’s own memory; they are positions in the virtual, effectively infinite array that the cache represents.
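As a rough illustration of those virtual indexes (again assuming a Python-accessible QContiguousCache with the documented C++ behaviour): once the cache is full, appending further items advances both indexes while the oldest items are dropped.

cache = QContiguousCache(3)               # room for three items at a time
for i in range(5):
    cache.insert(i, f"row {i}")           # indexes 0..4 of the virtual array
print(cache.firstIndex(), cache.lastIndex())   # 2 4 -- the two oldest rows were dropped
print(cache.at(3))                        # "row 3"; at() also takes the virtual index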

By using append() and prepend() the code ensures that items that may still be on the screen are not lost when the requested row has not moved far from the current cache range. insert() can potentially remove more than one item from the cache, because QContiguousCache does not allow gaps. If your cache needs to jump quickly back and forth between rows with significant gaps between them, consider using QCache instead.
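The following sketch, under the same assumption about the bindings, shows why a large jump is costly for QContiguousCache: inserting outside the current range clears whatever is already cached.

cache = QContiguousCache(100)
cache.insert(0, "row 0")
cache.insert(1, "row 1")                  # contiguous with the existing range
cache.insert(500, "row 500")              # far outside it: the cache is cleared first
print(cache.firstIndex(), cache.lastIndex(), cache.count())   # 500 500 1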

And that’s it. A perfectly reasonable cache, using minimal memory for a very large list. In this case the accessor that brings rows into the cache generates random data rather than fixed data, which lets you see, when running the example, how the cache keeps only a local range of rows.

def fetchRow(self, position):
    # QRandomGenerator.global_() is the keyword-safe Python spelling of
    # QRandomGenerator::global(); the C++ example uses ++position, i.e.
    # a bound of position + 1.
    return str(QRandomGenerator.global_().bounded(position + 1))

It is also worth considering pre-fetching items into the cache outside of the application’s paint routine. This can be done either in a separate thread or with a QTimer that incrementally expands the cache range before rows outside the current range are requested.
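A timer-driven pre-fetch might look roughly like the sketch below. The names startPrefetch() and _prefetchOne() are invented for illustration, the 50 ms interval is arbitrary, and m_rows, m_count and fetchRow() are the members used above.

from PySide6.QtCore import QTimer

def startPrefetch(self):
    # Hypothetical helper, called once (for example from the constructor).
    self._prefetch_timer = QTimer(self)
    self._prefetch_timer.timeout.connect(self._prefetchOne)
    self._prefetch_timer.start(50)

def _prefetchOne(self):
    # Grow the cached range one row past the last cached index, so that
    # scrolling forward is likely to find the row already in the cache.
    # Stop once the cache is full so rows the view may still show are kept.
    next_row = self.m_rows.lastIndex() + 1
    if next_row < self.m_count and self.m_rows.count() < self.m_rows.capacity():
        self.m_rows.append(self.fetchRow(next_row))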

Example project @ code.qt.io