I’m trying to develop an application to manage a data connection with libqmi 1.28.2.
Whenever I send qmicli commands, these warnings appear when using qmi_wwan_q v1.1:
- Warning ** [/dev/cdc-wdm0] couldn’t detect transport type of port: unexpected driver detected: qmi_wwan_q
- Warning ** [/dev/cdc-wdm0] requested QMI mode but unexpected transport type found
Do you have any experience with this kind of output? I’d like to know whether it is critical for correctly managing the data connection.
Do you have any documentation on how to handle a data connection with libqmi using qmicli?
Please make sure you really need Quectel’s custom qmi_wwan_q driver, because that is not always the case. On new enough kernels, your modems may already be supported by the default qmi_wwan driver. If you use the original qmi_wwan instead of the Quectel-provided qmi_wwan_q, you should not see any warning.
Those warnings are harmless, btw, they won’t break any logic in the modem management operation.
The information at https://www.freedesktop.org/wiki/Software/libqmi/ is library related, not QMI protocol related. Quickly developing an application based on qmicli without any QMI protocol documentation is quite difficult, so a guide on how to configure and bring up a data connection would help a lot.
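For what it’s worth, a typical qmicli data-connection bring-up looks roughly like the sketch below. This is not official documentation, just the usual sequence; the device path, interface name, and the APN `internet` are placeholders for your setup:

```shell
#!/bin/sh
# Sketch of a data-connection bring-up with qmicli (placeholder values).
DEV=/dev/cdc-wdm0
IFACE=$(qmicli -d "$DEV" --get-wwan-iface)   # usually wwan0

# Recent modems expect raw-ip framing; the link must be down to change it
ip link set "$IFACE" down
echo Y > "/sys/class/net/$IFACE/qmi/raw_ip"
ip link set "$IFACE" up

# Start the data session; keep the client ID allocated so the session
# survives qmicli exiting
qmicli -d "$DEV" --wds-start-network="apn=internet,ip-type=4" \
       --client-no-release-cid

# Obtain the IP configuration, e.g. with a DHCP client on the interface,
# or read it with: qmicli -d "$DEV" --wds-get-current-settings
udhcpc -q -f -n -i "$IFACE"
```

The `--client-no-release-cid` part matters: if the WDS client ID is released when qmicli exits, the modem may tear the session down again.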
Quectel’s qmi_wwan_q driver should be completely equivalent to the upstream qmi_wwan driver. It’s probably just the same driver, the same exact sources, but with just additional device IDs to support Quectel specific modules, before those same IDs are added in the upstream qmi_wwan driver. That’s why I said you should first check whether the upstream qmi_wwan driver is enough or not for your needs, it may very well be, and it would remove the complication of needing to use a different driver.
The link from techship.com explaining how to use qmicli is what you need, even if using the qmi_wwan_q driver from Quectel.
The qmi_wwan_q driver is equivalent to the upstream qmi_wwan driver but, in addition, it simplifies things for their modules, mostly regarding multi-PDN support.
You’re right; the best way to proceed is to start with qmi_wwan and patch it as needed, like Quectel did with qmi_wwan_q.
How is the qmi_wwan_q driver different to the qmi_wwan driver w.r.t. multi PDN support? The upstream qmi_wwan driver allows setting up multiple virtual network interfaces already, either via the built-in add_mux/del_mux sysfs operations, or otherwise configuring the driver to passthrough so that the virtual network interfaces can be managed with rmnet.
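The add_mux/del_mux sysfs operations mentioned above look like this (a sketch assuming a recent kernel with the mux support; the interface name and mux IDs are examples):

```shell
# Create two virtual (QMAP-muxed) network interfaces on top of wwan0;
# each write creates a qmimuxN link bound to the given mux ID
echo 1 > /sys/class/net/wwan0/qmi/add_mux
echo 2 > /sys/class/net/wwan0/qmi/add_mux
ip link show type qmimux

# Remove the link associated with mux ID 1
echo 1 > /sys/class/net/wwan0/qmi/del_mux

# Alternatively, enable passthrough so the virtual links can be managed
# with the rmnet driver instead
echo Y > /sys/class/net/wwan0/qmi/pass_through
```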
Not sure I follow this; the WDA service setup should be completely unrelated to what driver is in use. The WDA service setup and e.g. data format configuration is done from userspace, e.g. using libqmi/qmicli. How is the WDA service setup different when using the qmi_wwan_q driver?
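For reference, the userspace WDA setup mentioned above can be done with qmicli along these lines (a sketch; the aggregation values and device path are illustrative, not recommendations):

```shell
# Read the data format currently configured in the modem
qmicli -d /dev/cdc-wdm0 --wda-get-data-format

# Configure raw-ip framing with QMAP aggregation; dl-max-datagrams and
# dl-datagram-max-size are example values
qmicli -d /dev/cdc-wdm0 --wda-set-data-format="link-layer-protocol=raw-ip,ul-protocol=qmap,dl-protocol=qmap,dl-max-datagrams=32,dl-datagram-max-size=31744"
```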
I think Quectel’s developers did that work to automate the process: the qmi_wwan_q driver handles the netdev calls to bring up the desired virtual network interfaces. The Quectel driver also supports most kernel versions.
I’m not sure about this, but looking at the qmi_wwan_q code, some settings are applied at bind time depending on the inserted module, which makes part of the WDA setup unnecessary, for example setting the downlink datagram parameters.
Doing some tests, I tried different values of dl-max-datagrams, but I got the same WDA result; it’s simply fixed in my case. That’s why I’m saying the WDA configuration is implicitly related to the QMI driver, which can set parameters based on the mounted module.
That is already supported in the upstream qmi_wwan when using rmnet. It is true though that it’s quite a recent addition.
Ok, fair enough
That’s because the WDA values are the values configured inside the modem firmware, while the kernel driver may or may not maintain related values; ideally both things should be in sync. That is why, when using the upstream qmi_wwan and add_mux/del_mux, the MTU of the master interface should be configured to be equal to the WDA “download max size” value, as explained by the developer here http://paldan.altervista.org/linux-qmap-qmi_wwan-multiple-pdn-setup
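Following the advice above, keeping the kernel side in sync with the modem-side WDA values could look like this (a sketch; the interface name and the size reported by your modem will differ, so parse the output accordingly):

```shell
# Ask the modem what it has configured over the WDA service
qmicli -d /dev/cdc-wdm0 --wda-get-data-format
# Suppose it reports a downlink data aggregation max size of 32768;
# set the master interface MTU to match, as the linked post suggests
ip link set wwan0 mtu 32768
```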
You won’t believe how much suffering I would have saved myself if I had stumbled upon this post before digging deep into the very same problem… or almost the same…
I have an RG500-EA, and qmi_wwan_q takes it as an RMNET interface and sets it to raw-ip mode (I think), but what happens is that quectel-CM can talk to the interface while udhcpc never gets an IP and stays stuck there forever.
However, if I use qmi_wwan, quectel-CM obtains an IP after raw-ip mode is set (echo Y > /sys…).
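For anyone landing here, the raw-ip toggle on the upstream qmi_wwan driver is a sysfs attribute; the usual incantation is (interface name is an example, and the link has to be down while switching):

```shell
# Switch upstream qmi_wwan to raw-ip framing for this interface
ip link set wwan0 down
echo Y > /sys/class/net/wwan0/qmi/raw_ip
ip link set wwan0 up
```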
Would it have any advantages on throughput to use qmi_wwan_q driver instead of stock qmi_wwan?
I see that Quectel’s driver sets rx_urb_size = 31744 for RG500, is that needed to be set while using stock qmi_wwan?
Whether to use qmi_wwan instead of qmi_wwan_q depends on your application. qmi_wwan_q was developed to make bringing up a Quectel module easy. The best practice is to use the generic driver, qmi_wwan, and apply the related patches.
Regarding throughput, you can also apply the fix-up parameters to qmi_wwan based on the module. Throughput depends on both the pipe size and the data flowing through it.
I’m not sure about this, but be careful when setting the URB size; it can crash your system.