2022-09-30

Authzforce condition evaluation of matchAny in multi-valued string

I'm looking for a way to define a condition in a policy rule so that, when we pass multiple string values in our certificate and authenticate against AuthzForce with that rule, the rule evaluates to 'true' if the condition's string value equals any one of the string values passed in the certificate. For example, if the condition's attribute value is "DNS:google.com" and the multi-valued string we receive from the certificate is ["DNS:google.nl", "DNS:google.com"], I would expect the rule to evaluate to 'true', since one of those values equals the condition's value ("DNS:google.com").

I tried to achieve this by defining a rule with this condition:

<Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
  <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">DNS:google.com</AttributeValue>
  <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:certificate" AttributeId="urn:oasis:names:tc:xacml:1.0:certificate-category:subject-alternative-name" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Apply>

I used the 'string-is-in' XACML function in the condition; however, the rule evaluates to 'false'. The attributes I send via the certificate using the crypto library look like this when they reach the PDP:

  <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:certificate">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:certificate-category:subject-alternative-name" IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">DNS:google.com</AttributeValue>
    </Attribute>
  </Attributes>
  <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:certificate">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:certificate-category:subject-alternative-name" IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string"> DNS:google.nl</AttributeValue>
    </Attribute>
  </Attributes>

Right now, with the way I defined the condition and the way I send the multi-valued string in the certificate, I manage to get the rule to evaluate to 'true' only if ALL values of the certificate are equal to, or contain as a substring, the string value of the condition. So it's more of a "matchAll". That is not what I want: I would like the rule to evaluate to 'true' if even one string of the multi-valued string in the certificate equals the string in the condition. In other words, I'm looking to implement a "matchAny" approach rather than the "matchAll" approach I have here.

Could you please advise why the rule evaluates to 'false' with my current implementation, and how to correct it, perhaps using a different XACML operator? The full policy is pasted below.

<PolicySet PolicySetId="root" Version="0.1.2" PolicyCombiningAlgId="urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-unless-permit"
    xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
    <Target/>
    <Policy PolicyId="dbfcb643-cb39-4560-9c11-95112df970d0" Version="0.1.0" RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit" MaxDelegationDepth="10">
        <Description>Policy for EAP authentications by SAN dns domains</Description>
        <Target/>
        <Rule RuleId="86ef9adb-2acb-43a1-aac6-b01fdeab9a44" Effect="Permit">
            <Description>Permit by certificate's SAN dns domain</Description>
            <Condition>
                <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:and">
                    <Description>new condition</Description>
                    <Apply FunctionId="urn:oasis:names:tc:xacml:3.0:function:any-of">
                        <Function FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal"/>
                        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">EAP</AttributeValue>
                        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:environment" AttributeId="urn:oasis:names:tc:xacml:1.0:environment:radius-auth-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
                    </Apply>
                    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
                        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">DNS:google.com</AttributeValue>
                        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:certificate" AttributeId="urn:oasis:names:tc:xacml:1.0:certificate-category:subject-alternative-name" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
                    </Apply>
                </Apply>
            </Condition>
            <AdviceExpressions>
                <AdviceExpression AdviceId="authorization-result" AppliesTo="Permit">
                    <AttributeAssignmentExpression AttributeId="profile-id">
                        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">bbfc5e21-0e9f-40a6-a5c6-fedd921bff2c</AttributeValue>
                    </AttributeAssignmentExpression>
                </AdviceExpression>
            </AdviceExpressions>
        </Rule>
    </Policy>
</PolicySet>
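
For comparison, here is a sketch of an alternative matchAny formulation using any-of with string-equal, mirroring the radius-auth-type check in the policy above. This is only a sketch; I have not confirmed whether it changes the result in AuthzForce:

<Apply FunctionId="urn:oasis:names:tc:xacml:3.0:function:any-of">
    <Function FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal"/>
    <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">DNS:google.com</AttributeValue>
    <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:certificate" AttributeId="urn:oasis:names:tc:xacml:1.0:certificate-category:subject-alternative-name" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Apply>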


REGEX expression for finding ship to address of different countries

I am trying to extract the shipping address from a commercial invoice.

One commercial invoice has its ship-to destination as Singapore; the other invoice has its ship-to destination as Hong Kong.

How do I write a regex to extract the destination address, which ends with either Singapore or Hong Kong?

I wrote a regex to extract the shipping address from a commercial invoice; see below:

shipto = re.findall(r"Shipped To/FRT Forwarder\n[a-zA-Z0-9\s\#\-\,]*SINGAPORE", text)

My problem is the shipping address could be SINGAPORE or HONG KONG or another location. How can I make the regex more generic?

for example: my shipping address could be XXXX Singapore

or

YYYY Hong Kong

How do I implement "either/or" logic in a regex for the address extraction?
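
A minimal sketch, assuming text holds the invoice contents as in the snippet above: a non-capturing group (?:...|...) lets the address block end in either city name, and re.IGNORECASE also covers mixed-case spellings like "Singapore".

import re

# Group the alternatives so the address block may end in either city name.
pattern = r"Shipped To/FRT Forwarder\n[a-zA-Z0-9\s\#\-\,]*(?:SINGAPORE|HONG KONG)"
shipto = re.findall(pattern, text, re.IGNORECASE)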



Why do I get MIME type error on Django/React app when loading css/js from Aws?

So I deployed a Django REST/React app on Heroku, where I serve my static and media files from AWS S3 buckets. After pushing to Heroku and accessing the API URL or the admin URL, everything works fine, but when I try to access my React URLs I get a MIME type error.

On the network tab of developer tools, I get status 301 on my JS and CSS file.

And in the console I get:

app.herokuapp.com/:1 Refused to apply style from 'https://app.herokuapp.com/static/css/main.9d3ee958.css/' because its MIME type 
('text/html') is not a supported stylesheet MIME type, and strict MIME checking is 
enabled.
app.herokuapp.com/:1 Refused to execute script from 'https://app.herokuapp.com/static/js/main.3b833115.js/' because its MIME type 
('text/html') is not executable, and strict MIME type checking is enabled.

Even though the URL above is correct and I do have those files in my bucket.

Here are my production settings:

from decouple import config
import django_heroku
import dj_database_url
from .base import *


SECRET_KEY = config('SECRET_KEY')
DEBUG = False

ALLOWED_HOSTS = ['*']

ROOT_URLCONF = 'portfolio.urls_prod'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            os.path.join(BASE_DIR, 'frontend/build')
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

# DATABASE
DATABASES = {}
DATABASES['default'] = dj_database_url.config(conn_max_age=600)

# HEROKU
django_heroku.settings(locals())

# AWS S3 SETTINGS
AWS_ACCESS_KEY_ID = config('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = config('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = config('AWS_STORAGE_BUCKET_NAME')
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_DEFAULT_ACL = 'public-read'

AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
AWS_HEADERS = {
    'Access-Control-Allow-Origin': '*',
}
AWS_QUERYSTRING_AUTH = False

# AWS STATIC SETTINGS
AWS_LOCATION = 'static'
STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
STATICFILES_STORAGE = 'portfolio.storage_backend.StaticStorage'

# AWS MEDIA SETTINGS
DEFAULT_FILE_STORAGE = 'portfolio.storage_backend.MediaStorage'
MEDIA_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, 'media')


STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'frontend/build/static'),
]

# HEROKU LOGGING
DEBUG_PROPAGATE_EXCEPTIONS = True

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'MYAPP': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    }
}


# HTTPS SETTINGS
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True


# HSTS SETTINGS
SECURE_HSTS_SECONDS = 31536000  # 1 year
SECURE_HSTS_PRELOAD = True
SECURE_HSTS_INCLUDE_SUBDOMAINS = True


options = DATABASES['default'].get('OPTIONS', {})
options.pop('sslmode', None)

This is my storage_backend.py code:

from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    location = 'media'
    file_overwrite = False

class StaticStorage(S3Boto3Storage):
    location = 'static'
    default_acl = 'public-read'

This is my urls_prod.py code:

from django.contrib import admin
from django.urls import path, re_path, include
from django.views.generic import TemplateView
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
  path("admin/", admin.site.urls),
  path('api/', include('api.urls')),

  re_path(r'^(?P<path>.*)/$', TemplateView.as_view(template_name='index.html')),
  path('', TemplateView.as_view(template_name='index.html')),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
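
For reference, a hedged sketch of a narrower catch-all (an assumption, not a confirmed fix): the pattern above only matches trailing-slash URLs, and the 301s in the network tab redirect the assets to exactly that form, which the catch-all then answers with index.html (text/html). A negative lookahead keeps the asset and API prefixes out of the SPA route:

re_path(r'^(?!static/|media/|api/|admin/)(?P<path>.*)/$',
        TemplateView.as_view(template_name='index.html')),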

This is my bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}

EDIT:

I checked the network tab in developer tools and realized that the static files for the admin are being served from my AWS bucket:

Request URL: 
https://bucket-name.s3.amazonaws.com/static/admin/css/base.css
Request Method: GET
Status Code: 200 OK (from disk cache)
Remote Address: 52.95.143.47:441
Referrer Policy: same-origin

But the static files for my React views are not:

Request URL: 
https://app.herokuapp.com/static/css/main.9d3ee958.css
Request Method: GET
Status Code: 301 Moved Permanently
Remote Address: 54.224.34.30:441
Referrer Policy: same-origin


My application deploys on Netlify when I build it locally, but it doesn't work when I deploy it from GitHub

Okay, so I recently finished my portfolio and I wanted to host it on Netlify through GitHub, but it failed.
Their docs and build log are so confusing that I do not know how to troubleshoot the problem. However, when I ran
npm run build
locally on my system and then dragged the build output onto Netlify, it worked.
I'm not saying I can't just keep doing this for all my websites, but I would like to know any potential reason why it didn't work when I tried it from GitHub.

For context, the current Node version is 16 and the one I am using on my system is version 14. I'm not sure whether that is enough to affect the build from GitHub, but I thought I would mention it. If, however, that is the reason it isn't deploying from GitHub, then how do I fix it?
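
If the version mismatch is the culprit, one commonly used fix is to pin the Node version Netlify builds with, for example via a netlify.toml at the repo root (a sketch; an .nvmrc file also works):

# netlify.toml
[build.environment]
  NODE_VERSION = "14"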



How to release a Java ReentrantLock after some time no matter what

My objective is to avoid thread deadlock or starvation. I have the following sample code using ReentrantLock:

class X {
  private final ReentrantLock lock = new ReentrantLock();
  // ...

  public void m1() {
    lock.lock();  // block until condition holds
    try {
      // ... method body
      // ... start doing the calculations here ...
    } finally {
      // Do not release the lock here; instead, release it in m2()
    }
  }

  public void m2() {
    try {
      // ... method body
      // ... continue doing the calculations here
    } finally {
      lock.unlock();
    }
  }
}

I know I can use tryLock() with a timeout, but I am also thinking of ensuring the lock will be unlocked no matter what, since the lock is acquired in m1() and released in m2(). How can I ensure it will be unlocked after, say, 3 seconds no matter what, as soon as I acquire the lock in m1()?

For the above to be successful, i.e. without sending an unlock request after 3 seconds, the caller or user of the JavaBean must ensure that m1() is called and then m2() immediately afterwards. This is a restriction I want to avoid: if the programmer forgets to do that, it might result in spending a long time troubleshooting why the system is getting into a deadlock.

Thoughts:

I am thinking of using Scheduled Tasks and Timers; will that work?
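
One caveat about the timer idea, along with a sketch of the tryLock variant mentioned above: a ReentrantLock can only be released by the thread that holds it, so a scheduled task running on another thread cannot call unlock() on its behalf; it would get an IllegalMonitorStateException. A minimal sketch that bounds the wait instead, using the 3-second figure from above (XWithTimeout is a made-up name):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class XWithTimeout {
    private final ReentrantLock lock = new ReentrantLock();

    public void m1() throws InterruptedException {
        // Wait at most 3 seconds for the lock instead of blocking forever.
        if (!lock.tryLock(3, TimeUnit.SECONDS)) {
            throw new IllegalStateException("could not acquire lock within 3s");
        }
        // ... start doing the calculations here ...
    }

    public void m2() {
        // unlock() must run on the thread that holds the lock.
        if (lock.isHeldByCurrentThread()) {
            lock.unlock();
        }
    }
}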



2022-09-29

Fetching multiple values serial wise, if data is repeated

I need help understanding which lookup function I can use that fetches values from another sheet in serial order. In the attached image you can see that a product has multiple order IDs. I want the lookup to fetch values in serial order (ignoring values fetched earlier) when a product name is entered twice in the lookup sheet. Is there a VBA function or a formula for such a search? I am a novice and help will be appreciated.

Original Sheet

Lookup sheet
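
For reference, a sketch of the usual "nth match" formula pattern, with hypothetical names: the original sheet is called Orders, product names are in its column A, order IDs are in its column B, and A2 on the lookup sheet holds the product name. COUNTIF($A$2:$A2, $A2) counts how many times the product has appeared so far, so the second entry pulls the second order ID (enter as an array formula with Ctrl+Shift+Enter in older Excel versions):

=IFERROR(INDEX(Orders!$B:$B, SMALL(IF(Orders!$A:$A=$A2, ROW(Orders!$A:$A)), COUNTIF($A$2:$A2, $A2))), "")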



DryIoc: register decorator with two interfaces, retrieve the decorator instance when resolving the other interface

Here is a somewhat simplified description of the problem I'm trying to solve: I have a service (e.g. a repository) implementing an interface that I need to inject as a dependency:

public class Service : IService { ... }

I'd like to add a decorator, for example one that adds caching, that also implements another interface:

public class CachingService: IService, IFlushable
{
  public CachingService(IService decoratee) { ... }

  public void Flush() { ... }
}

public interface IFlushable
{
  public void Flush();
}

Normally, I'd just register the CachingService as an implementation of IService and as a decorator, using Setup.Decorator or Setup.DecoratorWith. But in this case I have an extra requirement related to the IFlushable interface. There will be several different services with their own decorators, all of them implementing both the decorated service interface and IFlushable. I need to inject all the IFlushable decorators as a dependency to be able to flush all the caches on request.

public class CacheHandler
{
  public CacheHandler(IFlushable[] cache) { ... }

  public void FlushAllCaches() { ... }
}

The problem is that this CacheHandler must receive the same decorator instances that were applied to the Service classes.

I have tried several solutions using RegisterMapping and tried to scope the resolution of the caches to their decorated services, but I could not make it work. Either I receive an error that the container cannot resolve the decorators (which makes sense), or I need to register the decorators themselves, but in the latter case the CacheHandler receives a new set of IFlushable instances.

The more I think about it, the more I feel that what I'm trying to achieve here might not even be possible using a DI container; maybe I'm solving this the wrong way. My question is whether my approach is valid and/or how I can get all the applied IFlushable decorator instances as a dependency.



restoring the exception environment

Can anyone explain the concept of restoring the exception environment simply and clearly?

It is said that when we use an exception handler in the try...endtry statement, the program restores the exception environment when it reaches the endtry; but if it suddenly encounters a break statement, for example, this restoration does not take place. Even after exiting the try...endtry statement the program thinks that it is still in the previous exception environment, and if another error occurs, it returns to the previous try...endtry handler.

Like the following code snippet:

program testBadInput3;
#include( "stdlib.hhf" )
static
    input: int32;
begin testBadInput3;

    // This forever loop repeats until the user enters
    // a good integer and the break statement below exits the loop.

    forever

        try

            stdout.put( "Enter an integer value: " );
            stdin.get( input );
            stdout.put( "The first input value was: ", input, nl );

            break;

          exception( ex.ValueOutOfRange )

            stdout.put( "The value was too large, re-enter." nl );

          exception( ex.ConversionError )

            stdout.put( "The input contained illegal characters, re-enter." nl );

        endtry;

    endfor;

    // Note that the following code is outside the loop and there
    // is no try..endtry statement protecting this code.

    stdout.put( "Enter another number: " );
    stdin.get( input );
    stdout.put( "The new number is: ", input, nl );

end testBadInput3;


How to add more data to an existing plotly graph?

I have successfully plotted the below data using plotly from an Excel file.

(screenshot of the source data omitted)

Here is my code:

file_loc1 = "AgeGroupData_time_to_treatment.xlsx"

df_centroid_CoordNew = pd.read_excel(file_loc1, index_col=None, na_values=['NA'], usecols="C:D,AB")
df_centroid_CoordNew.head()

df_centroid_Coord['Ambulance_Treatment_Time'] = df_centroid_Coord ['Base_TT']

fig = px.scatter(df_centroid_Coord, x="x", y="y", 
                 title="Southern Region Centroids", 
                 color='Ambulance_Treatment_Time', 
                 hover_name="KnNamn",
                 hover_data= ['Ambulance_Treatment_Time', "TotPop"],
                 log_x=True, size_max=60, 
                 color_continuous_scale='Reds', range_color=(0.5,2), width=1250, height=1000)

fig.update_traces(marker={'size': 8, 'symbol': 1})
#fig.update_traces(marker={'symbol': 1})
fig.update_layout(paper_bgcolor="LightSteelBlue")

fig.show()

The shapes of the plotted data points are square.

Here is output of my code:

(screenshot of the resulting plot omitted)

Now, I want to plot more data points, as circles or other shapes, on the same plotly graph by reading another Excel file. Please have a look at the data below.

(screenshot of the new data omitted)

How can I add the new data to the existing plotly graph? (A sketch follows the two tables below.)

Map data with total population and treatment time (Base_TT):

    ID  KnNamn  x   y   TotPop  Base_TT
1   2   Växjö   14.662290   57.027520   9   1.599971
2   3   Bromölla    14.494072   56.065635   264 1.307165
3   4   Trelleborg  13.219968   55.478675   40  1.411554
4   5   Tomelilla   14.005013   55.721209   6   1.968138
5   6   Halmstad    12.737361   56.710973   386 1.309849
6   7   Alvesta 14.566685   56.748729   47  1.719117
7   8   Laholm  13.241388   56.413591   0   2.000620
8   9   Tingsryd    14.943081   56.542837   16  1.668725
9   10  Sölvesborg  14.574474   56.056953   1147    1.266862
10  11  Halmstad    13.068009   56.635666   38  1.589239
11  12  Tingsryd    14.699642   56.479597   3   1.960050
12  13  Vellinge    13.029769   55.484749   61  1.254957
13  14  Örkelljunga 13.169010   56.232819   12  1.429789
14  15  Svalöv  13.059068   55.853696   26  1.553722
15  16  Sjöbo   13.738205   55.601936   6   1.326429
16  17  Hässleholm  13.729872   56.347672   13  1.709021
17  18  Olofström   14.588037   56.290604   6   1.444833
18  19  Eslöv   13.168712   55.900311   3   1.527547
19  20  Ronneby 15.024222   56.273317   3   1.692005
20  21  Ängelholm   12.910101   56.246689   19  1.090544

Ambulance Data:

ID  Ambulance station name  Longtitude  Latitude
0   1   Älmhult 14.128734   56.547992
1   2   Ängelholm   12.870739   56.242114
2   3   Alvesta 14.549503   56.920740
3   4   Östra Ljungby   13.057450   56.188099
4   5   Broby   14.080958   56.254481
5   6   Bromölla    14.466869   56.072272
6   7   Förslöv 12.814913   56.350098
7   9   Hässleholm  13.778234   56.161536
8   10  Höganäs 12.556995   56.206016
9   11  Hörby   13.643265   55.849811
10  12  Halmstad, Väster    12.819960   56.674306
11  13  Halmstad, Öster 12.882289   56.676871
12  14  Helsingborg 12.738642   56.084708
13  15  Hyltebruk   13.238277   56.993058
14  16  Karlshamn   14.854022   56.186596
15  17  Karlskrona  15.606300   56.183054
16  18  Kristianstad    14.171371   56.031201
17  20  Löddeköpinge    12.995037   55.766946
18  21  Laholm  13.033763   56.498955
19  22  Landskrona  12.867245   55.872659
20  23  Lenhovda    15.283913   57.001953
21  24  Lessebo 15.267357   56.756860
22  25  Ljungby 13.935399   56.835023
23  26  Lund    13.226607   55.695212
24  27  Markaryd    13.591491   56.452057
25  28  Olofström   14.545848   56.272221
26  29  Osby    13.983674   56.384833
27  30  Perstorp    13.388304   56.130752
28  31  Ronneby 15.280554   56.211863
29  32  Sölvesborg  14.570503   56.052113
30  33  Simrishamn  14.338632   55.552765



How do you maintain the value of the last row in a column in Python, like in Excel?

I have looked around and haven't found an 'elegant' solution; it can't be that this isn't doable. What I need is a column ('col A') on a dataframe that is always 0; if the adjacent column ('col B') hits 1, the value changes to 1, and all further rows should be 1 (no matter what else happens in 'col B'), until another column ('col C') hits 1, at which point 'col A' returns to 0, and this repeats. The data has thousands of rows and gets updated regularly. Any ideas? I have tried shift, iloc and loops, but can't make it work. The result should look something like this:


date col A col B col C
...   0     0     0
...   0     0     0
...   1     1     0
...   1     1     0
...   1     0     1
...   0     0     0
...   0     0     0
...   1     1     0
...   1     1     0
...   1     0     0
...   1     0     0
...   1     1     0
...   1     0     0
...   1     1     0
...   1     0     1
...   0     0     0

This is the base code I have been thinking about, but I can't get it to work:

df['B'] = df['A'].apply(lambda x: 1 if x == 1 else 0)

for i in range(1, len(df)):
    if df.loc[i, 'C'] == 1:
        df.loc[i, 'B'] = 0
    else:
        df.loc[i, 'B'] = df.loc[i-1, 'B']
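
A minimal sketch of the latch described above, assuming the columns are named 'col B' and 'col C' as in the sample table:

def latch(df):
    state = 0
    out = []
    for b, c in zip(df['col B'], df['col C']):
        if b == 1:
            state = 1   # col B fired: latch on
        out.append(state)
        if c == 1:
            state = 0   # col C fired: reset starting from the next row
    df['col A'] = out
    return df

The reset happens after the append, so the row where 'col C' hits 1 still shows 1, matching the sample output above.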


Cannot Browse to specific Type in Settings designer for WPF/.net Core application

When I've used the Settings designer before, I've been able to browse to find non-standard types (e.g. uncommon enums) to use in my settings, via a "Browse" button at the bottom of the drop-down under the "Type" column. I'm developing a WPF desktop application for .NET Core and there is no Browse option, as pictured below:

(screenshot of the Type drop-down omitted)

I did go into the code-behind (Settings.Designer.cs) and edited the code manually, but on saving, this just reverted to string. I'm guessing this may have something to do with settings also having an element in App.config; I notice it has a "serializeAs" tag and didn't know what to put there. Example of the code behind the settings and App.config:

[global::System.Configuration.UserScopedSettingAttribute()]
[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
[global::System.Configuration.DefaultSettingValueAttribute("")]
public string UiTheme {
    get {
        return ((string)(this["UiTheme"]));
    }
    set {
        this["UiTheme"] = value;
    }
}
<userSettings>
    <GameBoxer.WPF.Properties.Settings>
        <setting name="UiTheme" serializeAs="String">
            <value />
        </setting>
    </GameBoxer.WPF.Properties.Settings>
</userSettings>

Does anyone know how to bring back the 'Browse' option? Or how to do this correctly in code?

I'm using Visual Studio 2022 Community

Thanks


UPDATE: So, I learned that this is "By Design" in VS2022, according to MS here. It still works fine in VS2019! But they've taken it out of VS2022 and I can't figure out how to do it in code. MS, you're one of my faves out of the bunch, but sometimes you're as mad as a box of frogs.
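
For the code route, here is a hypothetical hand-edit of Settings.Designer.cs, assuming an enum named GameBoxer.WPF.UiThemeKind (a made-up type for illustration); enums round-trip through the serializeAs="String" element shown above. Note the designer may regenerate this file, which is exactly the reversion described:

[global::System.Configuration.UserScopedSettingAttribute()]
[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
[global::System.Configuration.DefaultSettingValueAttribute("Dark")]
public GameBoxer.WPF.UiThemeKind UiTheme {
    get {
        return ((GameBoxer.WPF.UiThemeKind)(this["UiTheme"]));
    }
    set {
        this["UiTheme"] = value;
    }
}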



2022-09-28

type guards for optional parameters

I have the following fixture file that I have type-guarded below. It has a few optional properties.

Fixture file:

 {
      "profiles": [
        {
          "name": "Laakea",
          "phoneNumber": "2033719225",
          "authGroupName": "Drivers"
        },
        {
          "name": "Lkhagvasuren",
          "phoneNumber": "2033719225",
          "authGroupName": "Drivers"
        },
        {
          "name": "Joaquin",
          "phoneNumber": "2033719225"
        }
      ]
    }

Type interface:

 export interface Profile {
      name: string;
      authGroupName?: string;
      phoneNumber?: string;
      email?: string;
    }

Type guard function:

export function isValidProfiles(profiles: unknown): profiles is Profile[] {
  if (!Array.isArray(profiles)) {
    return false;
  }
  for (let index = 0; index < profiles.length; index += 1) {
    if (typeof profiles[index].name !== 'string') {
      return false;
    }
    if (profiles[index].email) {
      if (typeof profiles[index].email !== 'string') {
        return false;
      }
    }
    if (profiles[index].phoneNumber) {
      if (typeof profiles[index].phoneNumber !== 'string') {
        return false;
      }
    }
    if (profiles[index].authGroupName) {
      if (typeof profiles[index].authGroupName !== 'string') {
        return false;
      }
    }
  }

  return true;
}

I was wondering if I could write this better, instead of all these if statements?
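
For reference, one possible tightening, as a sketch: factor the repeated checks into a small helper and let Array.prototype.every do the loop. One behavioral nuance: this version rejects null fields, which the original truthiness checks let through.

// Accept a field that is absent (undefined) or a string.
const isOptionalString = (value: unknown): boolean =>
  value === undefined || typeof value === 'string';

export function isValidProfiles(profiles: unknown): profiles is Profile[] {
  return (
    Array.isArray(profiles) &&
    profiles.every(
      (p) =>
        typeof p.name === 'string' &&
        isOptionalString(p.email) &&
        isOptionalString(p.phoneNumber) &&
        isOptionalString(p.authGroupName)
    )
  );
}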



Little Endian in Instruction

I'm learning about RISC-V instructions in Computer Architecture. What I wonder is: because of little-endianness, does the low digit of any number in a RISC-V instruction sit in the low bit? I know that RISC-V uses little-endian byte order to represent data in memory, but I'm not sure the same idea applies to the numbers encoded inside instructions.

For example, the add instruction has the form [funct7][rs2][rs1][funct3][rd][opcode]: the MSB is in funct7, the LSB is in the opcode, and rs1, rs2, and rd are 5-bit numbers. If [rd] is 0b00001, the 1 sits in the LSB position of rd's 5 bits. This is my question: is the reason the 1 sits in the LSB that RISC-V is little-endian? And if so, would that 1 be in the MSB position on a big-endian architecture?
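
A small illustration of the distinction, as a sketch: field positions are defined on the 32-bit instruction value itself, so extracting rd works the same on any host; byte order only affects how the four bytes of the word sit in memory. Using add x1, x2, x3, which encodes to 0x003100B3:

insn = 0x003100B3           # add x1, x2, x3

rd  = (insn >> 7)  & 0x1F   # bits 11:7  -> 1 (x1)
rs1 = (insn >> 15) & 0x1F   # bits 19:15 -> 2 (x2)
rs2 = (insn >> 20) & 0x1F   # bits 24:20 -> 3 (x3)

# A little-endian machine stores this word as the bytes b3 00 31 00,
# but the bit positions of rd/rs1/rs2 within the value do not change.
print(rd, rs1, rs2)         # 1 2 3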



Regex matching mixed string segments containing operator, string designator, and curly-brace group

I am looking for a C# regex solution to match/capture some small but complex chunks of data. I have thousands of unstructured chunks of data in my database (comes from a third-party data store) that look similar to this:

not BATTCOMPAR{275} and FORKCARRIA{ForkSpreader} and SIDESHIFT{WithSSPassAttachCenterLine} and TILTANGLE{4up_2down} and not AUTOMATSS{true} and not FORKLASGUI{true} and not FORKCAMSYS{true} and OKED{true}

I want to be able to split that up into discrete pieces (regex match/capture) like the following:

not BATTCOMPAR{275} 
and FORKCARRIA{ForkSpreader} 
and SIDESHIFT{WithSSPassAttachCenterLine} 
and TILTANGLE{4up_2down} 
and not AUTOMATSS{true} 
and not FORKLASGUI{true} 
and not FORKCAMSYS{true} 
and OKED{true}

The data will always conform to the following rules:

  • At the end of each chunk of data there will be a string enclosed by curly braces, like this: {275}
  • The "curly brace grouping" will always come at the end of a string beginning with not or and or and not or nothing. The "nothing" is the same as and and will only occur when it's the first chunk in the string. For example, if my and OKED{true} had come at the beginning of the string, the and would have been omitted and OKED{true} would have been prefixed by nothing (empty string). But it's the same as an and.
  • After the operator (and or not or and not or nothing) there will always be a string designator that ends just before the curly brace grouping. Example: BATTCOMPAR
  • It appears that the string designator will always touch the curly brace grouping with no space in between but I'm not 100% sure. The regex should accommodate the scenario in which a space might come between the string designator and the left curly brace.
  • Summary #1 of above points: each chunk will have 3 distinct sub-groups: operator (such as and not), string designator (such as BATTCOMPAR), and curly brace grouping (such as {ForkSpreader}).
  • Summary #2 of above points: each chunk will begin with one of the 3 listed operators, or nothing, and end with a right-curly-brace. It is guaranteed that only 1 left-curly-brace and only 1 right-curly-brace will exist within the entire segment, and they will always be grouped together at the end of the segment. There is no fear of encountering additional/stray curly braces in other parts of the segment.

I have experimented with a few different regex constructions:

Match curly brace groupings:

Regex regex = new Regex(@"{(.*?)}");
return regex.Matches(str);

The above almost works, but it captures only the curly brace groupings, not the operator and string designator that go with them.

Capture chunks based on string prefix, trying to match operator strings:

var capturedWords = new List<string>();
string regex = $@"(?<!\w){prefix}\w+";

foreach ( Match match in Regex.Matches(haystack, regex) ) {
    capturedWords.Add(match.Value);
}

return capturedWords;

The above partially works, but it captures only the operators, not the entire chunk I need (operator + string designator + curly brace grouping).

Thanks in advance for any help.
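
For what it's worth, here is a sketch of a single pattern that captures all three sub-groups per chunk. The alternation is ordered so "and not" wins over "and", the optional group covers the leading chunk with no operator, and \s* tolerates a possible space before the left curly brace (the variable names are assumptions):

using System.Text.RegularExpressions;

Regex regex = new Regex(@"(and not|and|not)?\s*(\w+)\s*({[^{}]*})");

foreach (Match match in regex.Matches(haystack))
{
    string op         = match.Groups[1].Value; // "and not", "and", "not", or ""
    string designator = match.Groups[2].Value; // e.g. "BATTCOMPAR"
    string braces     = match.Groups[3].Value; // e.g. "{275}"
}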



Use a multi-task learning model shared on the Hugging Face Hub at inference

I trained a multi-task BERT model successfully. It works by having a shared BERT-style encoder transformer and two different task heads, one per task: a binary classification head (num_labels = 2) and a sentiment classification head (num_labels = 5).

I tried to share it on the Hub and reload it afterwards for inference, but I failed.

Here is the code:

from typing import List

import torch
from torch import nn
from transformers import BertModel, BertPreTrainedModel, PretrainedConfig

class SequenceClassificationHead(nn.Module):
    def __init__(self, hidden_size, num_labels, dropout_p=0.1):

        super().__init__()
        self.num_labels = num_labels
        self.dropout = nn.Dropout(dropout_p)
        self.classifier = nn.Linear(hidden_size, num_labels)

        self._init_weights()

    def _init_weights(self):
        self.classifier.weight.data.normal_(mean=0.0, std=0.02)
        if self.classifier.bias is not None:
            self.classifier.bias.data.zero_()

    def forward(self, sequence_output, pooled_output, labels=None, **kwargs):
        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)

        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(
                logits.view(-1, self.num_labels), labels.long().view(-1)
            )

        # Return both values; the caller unpacks (logits, task_loss).
        return logits, loss


class MultiTaskModel(BertPreTrainedModel):
    def __init__(self, checkpoint, tasks: List):
        super().__init__(PretrainedConfig())

        self.encoder = BertModel.from_pretrained(checkpoint)

        self.output_heads = nn.ModuleDict()
        for task in tasks:
            decoder = self._create_output_head(self.encoder.config.hidden_size, task)
            # ModuleDict requires keys to be strings
            self.output_heads[str(task.id)] = decoder

    @staticmethod
    def _create_output_head(encoder_hidden_size: int, task):
        if task.type == "seq_classification":
            return SequenceClassificationHead(encoder_hidden_size, task.num_labels)
        else:
            raise NotImplementedError()

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
        task_ids=None,
        **kwargs,

        ):

        outputs = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
        )

        sequence_output, pooled_output = outputs[:2]
        unique_task_ids_list = torch.unique(task_ids).tolist()

        loss_list = []
        logits = None
        for unique_task_id in unique_task_ids_list:

            task_id_filter = task_ids == unique_task_id
            logits, task_loss = self.output_heads[str(unique_task_id)].forward(
                sequence_output[task_id_filter],
                pooled_output[task_id_filter],
                labels=None if labels is None else labels[task_id_filter],
                attention_mask=attention_mask[task_id_filter],
            )
            if task_loss is not None:
                loss_list.append(task_loss)

        # Aggregate the per-task losses; summing here is an assumption.
        loss = torch.stack(loss_list).sum() if loss_list else None
        return logits, loss

I train it with the Trainer API and share it with the Trainer API. That works, but when I want to use the model for inference and load it from the Hub, I get this message:

loading file vocab.txt from cache at /root/.cache/huggingface/hub/models--HCKLab--BiBert-MultiTask/snapshots/f3523728d3e144c0b7d262f6ff924cc174bc0d03/vocab.txt
loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--HCKLab--BiBert-MultiTask/snapshots/f3523728d3e144c0b7d262f6ff924cc174bc0d03/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--HCKLab--BiBert-MultiTask/snapshots/f3523728d3e144c0b7d262f6ff924cc174bc0d03/special_tokens_map.json
loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--HCKLab--BiBert-MultiTask/snapshots/f3523728d3e144c0b7d262f6ff924cc174bc0d03/tokenizer_config.json
loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--HCKLab--BiBert-MultiTask/snapshots/f3523728d3e144c0b7d262f6ff924cc174bc0d03/config.json
Model config BertConfig {
  "architectures": [
    "MultiTaskModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.22.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}

loading weights file pytorch_model.bin from cache at /root/.cache/huggingface/hub/models--HCKLab--BiBert-MultiTask/snapshots/f3523728d3e144c0b7d262f6ff924cc174bc0d03/pytorch_model.bin
Some weights of the model checkpoint at HCKLab/BiBert-MultiTask were not used when initializing BertModel: [‘encoder.encoder.layer.4.output.LayerNorm.bias’, ‘encoder.encoder.layer.3.attention.self.key.weight’, ‘encoder.encoder.layer.1.attention.self.query.bias’, ‘encoder.encoder.layer.4.attention.self.query.bias’, ‘encoder.encoder.layer.5.output.LayerNorm.bias’, ‘encoder.encoder.layer.4.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.11.attention.output.dense.bias’, ‘encoder.encoder.layer.2.attention.self.query.bias’, ‘encoder.pooler.dense.weight’, ‘encoder.encoder.layer.6.intermediate.dense.weight’, ‘encoder.encoder.layer.1.attention.self.key.bias’, ‘encoder.encoder.layer.7.attention.output.dense.weight’, ‘encoder.encoder.layer.9.attention.output.LayerNorm.weight’, ‘encoder.embeddings.LayerNorm.bias’, ‘encoder.encoder.layer.8.intermediate.dense.bias’, ‘encoder.encoder.layer.4.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.4.attention.self.value.weight’, ‘encoder.encoder.layer.5.output.dense.bias’, ‘encoder.encoder.layer.2.output.LayerNorm.weight’, ‘encoder.encoder.layer.5.output.LayerNorm.weight’, ‘encoder.encoder.layer.6.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.7.output.dense.weight’, ‘encoder.encoder.layer.7.intermediate.dense.bias’, ‘encoder.encoder.layer.9.output.dense.bias’, ‘encoder.encoder.layer.4.output.dense.weight’, ‘encoder.encoder.layer.10.attention.self.key.weight’, ‘encoder.encoder.layer.11.output.dense.bias’, ‘encoder.embeddings.position_embeddings.weight’, ‘encoder.encoder.layer.1.attention.self.value.bias’, ‘encoder.encoder.layer.6.attention.self.value.weight’, ‘encoder.encoder.layer.10.attention.self.value.bias’, ‘encoder.encoder.layer.6.attention.output.dense.bias’, ‘encoder.encoder.layer.5.attention.self.query.weight’, ‘encoder.encoder.layer.11.attention.output.dense.weight’, ‘encoder.encoder.layer.0.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.0.attention.self.key.weight’, ‘encoder.encoder.layer.11.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.1.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.3.output.LayerNorm.bias’, ‘encoder.encoder.layer.0.intermediate.dense.weight’, ‘encoder.encoder.layer.8.attention.self.query.weight’, ‘encoder.encoder.layer.10.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.3.attention.output.dense.bias’, ‘encoder.encoder.layer.3.output.LayerNorm.weight’, ‘encoder.encoder.layer.10.attention.self.key.bias’, ‘encoder.encoder.layer.1.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.5.attention.self.key.weight’, ‘encoder.encoder.layer.7.attention.self.key.weight’, ‘encoder.encoder.layer.9.attention.self.key.bias’, ‘encoder.encoder.layer.6.attention.self.query.bias’, ‘encoder.encoder.layer.9.output.LayerNorm.bias’, ‘encoder.encoder.layer.10.attention.output.dense.weight’, ‘encoder.encoder.layer.1.output.LayerNorm.bias’, ‘encoder.encoder.layer.0.output.dense.bias’, ‘encoder.encoder.layer.11.attention.self.value.weight’, ‘encoder.encoder.layer.6.attention.self.query.weight’, ‘encoder.encoder.layer.2.attention.output.LayerNorm.bias’, ‘output_heads.0.classifier.bias’, ‘encoder.encoder.layer.10.output.dense.weight’, ‘encoder.encoder.layer.5.attention.self.query.bias’, ‘encoder.encoder.layer.8.attention.output.dense.weight’, ‘encoder.encoder.layer.8.intermediate.dense.weight’, ‘encoder.encoder.layer.1.intermediate.dense.weight’, ‘encoder.encoder.layer.7.attention.self.query.bias’, ‘encoder.embeddings.token_type_embeddings.weight’, 
‘encoder.encoder.layer.5.intermediate.dense.weight’, ‘encoder.encoder.layer.4.attention.output.dense.weight’, ‘encoder.encoder.layer.9.intermediate.dense.weight’, ‘encoder.encoder.layer.7.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.10.attention.output.dense.bias’, ‘encoder.encoder.layer.3.output.dense.weight’, ‘encoder.encoder.layer.11.attention.self.query.weight’, ‘encoder.encoder.layer.6.attention.self.key.bias’, ‘encoder.encoder.layer.8.output.dense.weight’, ‘encoder.encoder.layer.0.attention.self.value.bias’, ‘encoder.encoder.layer.0.attention.self.query.weight’, ‘encoder.pooler.dense.bias’, ‘encoder.encoder.layer.8.output.LayerNorm.bias’, ‘encoder.encoder.layer.6.attention.output.dense.weight’, ‘encoder.encoder.layer.7.attention.self.value.bias’, ‘encoder.embeddings.position_ids’, ‘encoder.encoder.layer.10.attention.self.value.weight’, ‘encoder.encoder.layer.10.output.dense.bias’, ‘encoder.encoder.layer.7.attention.output.LayerNorm.bias’, ‘output_heads.0.classifier.weight’, ‘encoder.encoder.layer.8.output.LayerNorm.weight’, ‘encoder.encoder.layer.6.attention.self.key.weight’, ‘encoder.encoder.layer.0.intermediate.dense.bias’, ‘encoder.encoder.layer.2.attention.output.LayerNorm.weight’, ‘encoder.embeddings.word_embeddings.weight’, ‘encoder.encoder.layer.4.attention.self.key.bias’, ‘encoder.encoder.layer.6.output.dense.bias’, ‘encoder.encoder.layer.2.attention.self.value.bias’, ‘encoder.encoder.layer.5.attention.self.key.bias’, ‘encoder.encoder.layer.2.attention.self.key.weight’, ‘encoder.encoder.layer.5.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.11.attention.self.key.bias’, ‘encoder.encoder.layer.1.attention.self.key.weight’, ‘encoder.encoder.layer.0.output.LayerNorm.bias’, ‘encoder.encoder.layer.2.attention.self.value.weight’, ‘encoder.encoder.layer.2.intermediate.dense.weight’, ‘encoder.encoder.layer.4.attention.self.query.weight’, ‘encoder.encoder.layer.5.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.5.attention.output.dense.weight’, ‘encoder.encoder.layer.9.intermediate.dense.bias’, ‘encoder.encoder.layer.3.attention.self.value.weight’, ‘encoder.encoder.layer.11.output.LayerNorm.weight’, ‘encoder.encoder.layer.6.attention.self.value.bias’, ‘encoder.encoder.layer.7.attention.output.dense.bias’, ‘encoder.encoder.layer.7.attention.self.query.weight’, ‘encoder.encoder.layer.3.intermediate.dense.bias’, ‘encoder.encoder.layer.11.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.1.attention.output.dense.bias’, ‘encoder.encoder.layer.11.attention.self.query.bias’, ‘encoder.encoder.layer.5.attention.output.dense.bias’, ‘encoder.encoder.layer.8.attention.self.value.bias’, ‘encoder.encoder.layer.7.output.LayerNorm.weight’, ‘output_heads.1.classifier.weight’, ‘encoder.encoder.layer.2.intermediate.dense.bias’, ‘encoder.encoder.layer.10.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.9.attention.self.value.bias’, ‘encoder.encoder.layer.10.output.LayerNorm.weight’, ‘encoder.encoder.layer.10.output.LayerNorm.bias’, ‘encoder.encoder.layer.5.attention.self.value.bias’, ‘encoder.encoder.layer.9.attention.self.query.bias’, ‘encoder.encoder.layer.8.attention.self.query.bias’, ‘encoder.encoder.layer.11.output.dense.weight’, ‘output_heads.1.classifier.bias’, ‘encoder.encoder.layer.4.attention.output.dense.bias’, ‘encoder.encoder.layer.2.output.dense.weight’, ‘encoder.encoder.layer.1.output.LayerNorm.weight’, ‘encoder.encoder.layer.2.attention.output.dense.bias’, ‘encoder.encoder.layer.9.output.LayerNorm.weight’, 
‘encoder.encoder.layer.2.output.dense.bias’, ‘encoder.encoder.layer.9.attention.output.dense.bias’, ‘encoder.encoder.layer.10.attention.self.query.bias’, ‘encoder.encoder.layer.7.intermediate.dense.weight’, ‘encoder.encoder.layer.0.attention.output.dense.bias’, ‘encoder.encoder.layer.11.attention.self.value.bias’, ‘encoder.encoder.layer.3.intermediate.dense.weight’, ‘encoder.encoder.layer.3.attention.self.query.bias’, ‘encoder.encoder.layer.8.attention.self.value.weight’, ‘encoder.encoder.layer.11.intermediate.dense.bias’, ‘encoder.encoder.layer.5.output.dense.weight’, ‘encoder.encoder.layer.2.output.LayerNorm.bias’, ‘encoder.encoder.layer.10.intermediate.dense.weight’, ‘encoder.encoder.layer.11.intermediate.dense.weight’, ‘encoder.encoder.layer.5.attention.self.value.weight’, ‘encoder.encoder.layer.9.attention.output.dense.weight’, ‘encoder.encoder.layer.2.attention.output.dense.weight’, ‘encoder.encoder.layer.6.output.dense.weight’, ‘encoder.encoder.layer.1.output.dense.bias’, ‘encoder.encoder.layer.3.attention.self.value.bias’, ‘encoder.encoder.layer.3.attention.output.dense.weight’, ‘encoder.encoder.layer.4.intermediate.dense.bias’, ‘encoder.encoder.layer.0.attention.self.value.weight’, ‘encoder.encoder.layer.9.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.7.attention.self.value.weight’, ‘encoder.encoder.layer.10.intermediate.dense.bias’, ‘encoder.encoder.layer.5.intermediate.dense.bias’, ‘encoder.encoder.layer.8.output.dense.bias’, ‘encoder.encoder.layer.3.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.4.output.dense.bias’, ‘encoder.encoder.layer.4.output.LayerNorm.weight’, ‘encoder.encoder.layer.8.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.0.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.4.intermediate.dense.weight’, ‘encoder.encoder.layer.6.output.LayerNorm.weight’, ‘encoder.encoder.layer.9.attention.self.key.weight’, ‘encoder.encoder.layer.3.output.dense.bias’, ‘encoder.encoder.layer.0.attention.output.dense.weight’, ‘encoder.encoder.layer.9.output.dense.weight’, ‘encoder.encoder.layer.0.output.LayerNorm.weight’, ‘encoder.encoder.layer.11.output.LayerNorm.bias’, ‘encoder.encoder.layer.3.attention.self.query.weight’, ‘encoder.encoder.layer.0.attention.self.query.bias’, ‘encoder.encoder.layer.0.attention.self.key.bias’, ‘encoder.encoder.layer.3.attention.self.key.bias’, ‘encoder.encoder.layer.1.attention.output.dense.weight’, ‘encoder.encoder.layer.7.output.dense.bias’, ‘encoder.encoder.layer.9.attention.self.query.weight’, ‘encoder.encoder.layer.8.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.10.attention.self.query.weight’, ‘encoder.encoder.layer.4.attention.self.value.bias’, ‘encoder.encoder.layer.3.attention.output.LayerNorm.bias’, ‘encoder.encoder.layer.8.attention.output.dense.bias’, ‘encoder.encoder.layer.7.attention.self.key.bias’, ‘encoder.encoder.layer.0.output.dense.weight’, ‘encoder.encoder.layer.11.attention.self.key.weight’, ‘encoder.encoder.layer.8.attention.self.key.bias’, ‘encoder.embeddings.LayerNorm.weight’, ‘encoder.encoder.layer.2.attention.self.query.weight’, ‘encoder.encoder.layer.6.output.LayerNorm.bias’, ‘encoder.encoder.layer.7.output.LayerNorm.bias’, ‘encoder.encoder.layer.2.attention.self.key.bias’, ‘encoder.encoder.layer.6.intermediate.dense.bias’, ‘encoder.encoder.layer.6.attention.output.LayerNorm.weight’, ‘encoder.encoder.layer.9.attention.self.value.weight’, ‘encoder.encoder.layer.1.intermediate.dense.bias’, ‘encoder.encoder.layer.1.attention.self.query.weight’, 
‘encoder.encoder.layer.4.attention.self.key.weight’, ‘encoder.encoder.layer.1.output.dense.weight’, ‘encoder.encoder.layer.8.attention.self.key.weight’, ‘encoder.encoder.layer.1.attention.self.value.weight’]

    This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
    This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    Some weights of BertModel were not initialized from the model checkpoint at HCKLab/BiBert-MultiTask and are newly initialized: [‘encoder.layer.0.intermediate.dense.weight’, ‘encoder.layer.7.attention.self.key.weight’, ‘encoder.layer.6.attention.self.query.weight’, ‘encoder.layer.5.output.LayerNorm.bias’, ‘encoder.layer.7.attention.output.LayerNorm.weight’, ‘encoder.layer.6.attention.output.LayerNorm.weight’, ‘encoder.layer.3.output.dense.weight’, ‘encoder.layer.2.attention.output.dense.bias’, ‘encoder.layer.0.attention.output.LayerNorm.weight’, ‘encoder.layer.1.attention.self.key.bias’, ‘encoder.layer.0.attention.output.dense.weight’, ‘encoder.layer.1.attention.self.value.bias’, ‘encoder.layer.4.attention.self.value.weight’, ‘encoder.layer.1.attention.output.dense.bias’, ‘encoder.layer.7.intermediate.dense.bias’, ‘encoder.layer.2.output.LayerNorm.bias’, ‘encoder.layer.8.intermediate.dense.bias’, ‘encoder.layer.0.output.dense.bias’, ‘encoder.layer.10.intermediate.dense.weight’, ‘encoder.layer.5.attention.self.query.bias’, ‘encoder.layer.2.attention.self.query.weight’, ‘encoder.layer.5.attention.self.query.weight’, ‘encoder.layer.0.intermediate.dense.bias’, ‘encoder.layer.8.intermediate.dense.weight’, ‘encoder.layer.10.output.dense.bias’, ‘encoder.layer.0.attention.self.key.weight’, ‘encoder.layer.5.attention.output.dense.bias’, ‘encoder.layer.5.output.LayerNorm.weight’, ‘encoder.layer.7.intermediate.dense.weight’, ‘encoder.layer.8.output.dense.bias’, ‘encoder.layer.9.attention.self.key.bias’, ‘encoder.layer.11.output.dense.weight’, ‘encoder.layer.9.attention.self.key.weight’, ‘embeddings.LayerNorm.bias’, ‘encoder.layer.6.intermediate.dense.weight’, ‘encoder.layer.7.attention.self.query.bias’, ‘encoder.layer.1.intermediate.dense.weight’, ‘encoder.layer.7.attention.self.key.bias’, ‘encoder.layer.11.attention.output.dense.bias’, ‘encoder.layer.4.output.LayerNorm.weight’, ‘encoder.layer.7.attention.output.dense.weight’, ‘encoder.layer.11.attention.output.LayerNorm.bias’, ‘encoder.layer.5.output.dense.bias’, ‘encoder.layer.3.attention.self.query.bias’, ‘encoder.layer.8.attention.self.key.bias’, ‘encoder.layer.11.attention.self.query.bias’, ‘encoder.layer.1.attention.output.LayerNorm.weight’, ‘encoder.layer.4.attention.output.LayerNorm.weight’, ‘pooler.dense.bias’, ‘encoder.layer.3.intermediate.dense.weight’, ‘encoder.layer.10.attention.self.query.bias’, ‘encoder.layer.8.output.LayerNorm.weight’, ‘encoder.layer.7.attention.output.LayerNorm.bias’, ‘encoder.layer.4.output.LayerNorm.bias’, ‘encoder.layer.3.attention.self.query.weight’, ‘encoder.layer.1.output.dense.weight’, ‘encoder.layer.4.output.dense.bias’, ‘encoder.layer.10.attention.self.value.bias’, ‘encoder.layer.4.attention.self.query.weight’, ‘encoder.layer.7.output.dense.weight’, ‘encoder.layer.2.attention.self.query.bias’, ‘encoder.layer.1.intermediate.dense.bias’, ‘encoder.layer.10.output.LayerNorm.weight’, ‘encoder.layer.2.attention.self.value.bias’, ‘encoder.layer.11.attention.self.key.bias’, ‘encoder.layer.4.attention.output.LayerNorm.bias’, ‘encoder.layer.8.attention.output.dense.bias’, ‘encoder.layer.2.attention.self.value.weight’, ‘encoder.layer.6.output.LayerNorm.bias’, ‘encoder.layer.8.attention.self.key.weight’, ‘encoder.layer.0.attention.self.query.weight’, ‘encoder.layer.6.attention.self.query.bias’, ‘encoder.layer.8.attention.self.query.weight’, ‘encoder.layer.4.attention.output.dense.weight’, ‘encoder.layer.6.output.dense.weight’, ‘encoder.layer.11.attention.output.LayerNorm.weight’, 
‘encoder.layer.9.attention.output.LayerNorm.weight’, ‘encoder.layer.11.output.dense.bias’, ‘encoder.layer.1.output.LayerNorm.weight’, ‘encoder.layer.1.attention.output.dense.weight’, ‘encoder.layer.6.attention.self.value.bias’, ‘encoder.layer.7.attention.output.dense.bias’, ‘encoder.layer.8.attention.self.value.bias’, ‘encoder.layer.5.attention.self.value.bias’, ‘encoder.layer.3.intermediate.dense.bias’, ‘encoder.layer.11.intermediate.dense.bias’, ‘encoder.layer.9.attention.self.value.bias’, ‘encoder.layer.1.attention.self.key.weight’, ‘encoder.layer.9.attention.self.query.weight’, ‘encoder.layer.9.attention.self.value.weight’, ‘encoder.layer.4.attention.self.key.weight’, ‘embeddings.LayerNorm.weight’, ‘encoder.layer.3.attention.output.LayerNorm.bias’, ‘encoder.layer.2.attention.self.key.weight’, ‘encoder.layer.9.intermediate.dense.weight’, ‘encoder.layer.8.attention.output.LayerNorm.weight’, ‘encoder.layer.5.intermediate.dense.bias’, ‘embeddings.token_type_embeddings.weight’, ‘encoder.layer.7.output.LayerNorm.bias’, ‘encoder.layer.7.attention.self.value.bias’, ‘encoder.layer.9.attention.self.query.bias’, ‘encoder.layer.3.attention.self.key.weight’, ‘encoder.layer.3.attention.output.dense.bias’, ‘encoder.layer.0.output.dense.weight’, ‘encoder.layer.6.attention.self.key.bias’, ‘encoder.layer.4.intermediate.dense.weight’, ‘encoder.layer.8.attention.self.value.weight’, ‘encoder.layer.10.attention.self.key.bias’, ‘encoder.layer.7.attention.self.value.weight’, ‘encoder.layer.11.attention.self.value.weight’, ‘pooler.dense.weight’, ‘encoder.layer.8.attention.self.query.bias’, ‘encoder.layer.0.attention.self.key.bias’, ‘encoder.layer.9.output.dense.weight’, ‘encoder.layer.10.attention.output.LayerNorm.weight’, ‘encoder.layer.9.output.LayerNorm.bias’, ‘encoder.layer.2.intermediate.dense.weight’, ‘encoder.layer.10.attention.self.query.weight’, ‘encoder.layer.11.attention.self.value.bias’, ‘encoder.layer.0.attention.output.dense.bias’, ‘encoder.layer.1.attention.self.value.weight’, ‘encoder.layer.0.output.LayerNorm.bias’, ‘encoder.layer.6.attention.self.key.weight’, ‘encoder.layer.6.attention.output.LayerNorm.bias’, ‘encoder.layer.7.attention.self.query.weight’, ‘encoder.layer.6.attention.output.dense.bias’, ‘encoder.layer.5.attention.self.value.weight’, ‘encoder.layer.3.attention.self.value.weight’, ‘encoder.layer.5.output.dense.weight’, ‘encoder.layer.4.intermediate.dense.bias’, ‘encoder.layer.5.attention.output.LayerNorm.weight’, ‘encoder.layer.1.output.LayerNorm.bias’, ‘encoder.layer.7.output.LayerNorm.weight’, ‘encoder.layer.3.output.LayerNorm.weight’, ‘encoder.layer.5.attention.output.dense.weight’, ‘encoder.layer.11.attention.self.key.weight’, ‘encoder.layer.9.attention.output.dense.bias’, ‘encoder.layer.6.output.dense.bias’, ‘encoder.layer.2.output.dense.weight’, ‘encoder.layer.11.intermediate.dense.weight’, ‘encoder.layer.11.output.LayerNorm.weight’, ‘encoder.layer.1.attention.self.query.bias’, ‘encoder.layer.2.attention.output.dense.weight’, ‘encoder.layer.2.output.LayerNorm.weight’, ‘encoder.layer.0.attention.self.query.bias’, ‘encoder.layer.1.attention.output.LayerNorm.bias’, ‘encoder.layer.9.attention.output.dense.weight’, ‘encoder.layer.10.intermediate.dense.bias’, ‘encoder.layer.9.intermediate.dense.bias’, ‘embeddings.word_embeddings.weight’, ‘encoder.layer.0.attention.output.LayerNorm.bias’, ‘encoder.layer.6.intermediate.dense.bias’, ‘encoder.layer.8.output.LayerNorm.bias’, ‘encoder.layer.4.output.dense.weight’, ‘encoder.layer.10.output.dense.weight’, 
‘encoder.layer.9.output.dense.bias’, ‘encoder.layer.10.attention.output.dense.weight’, ‘encoder.layer.6.attention.output.dense.weight’, ‘encoder.layer.4.attention.self.query.bias’, ‘encoder.layer.6.output.LayerNorm.weight’, ‘encoder.layer.11.attention.self.query.weight’, ‘encoder.layer.2.attention.output.LayerNorm.weight’, ‘encoder.layer.1.attention.self.query.weight’, ‘encoder.layer.3.attention.self.key.bias’, ‘encoder.layer.7.output.dense.bias’, ‘encoder.layer.0.output.LayerNorm.weight’, ‘encoder.layer.3.attention.output.LayerNorm.weight’, ‘encoder.layer.5.intermediate.dense.weight’, ‘encoder.layer.6.attention.self.value.weight’, ‘encoder.layer.8.attention.output.dense.weight’, ‘encoder.layer.11.attention.output.dense.weight’, ‘encoder.layer.10.attention.output.LayerNorm.bias’, ‘encoder.layer.3.attention.self.value.bias’, ‘encoder.layer.10.attention.self.key.weight’, ‘encoder.layer.4.attention.output.dense.bias’, ‘encoder.layer.4.attention.self.key.bias’, ‘encoder.layer.5.attention.output.LayerNorm.bias’, ‘encoder.layer.10.output.LayerNorm.bias’, ‘encoder.layer.2.attention.output.LayerNorm.bias’, ‘encoder.layer.0.attention.self.value.bias’, ‘embeddings.position_embeddings.weight’, ‘encoder.layer.2.intermediate.dense.bias’, ‘encoder.layer.9.attention.output.LayerNorm.bias’, ‘encoder.layer.10.attention.output.dense.bias’, ‘encoder.layer.8.output.dense.weight’, ‘encoder.layer.11.output.LayerNorm.bias’, ‘encoder.layer.2.attention.self.key.bias’, ‘encoder.layer.4.attention.self.value.bias’, ‘encoder.layer.5.attention.self.key.weight’, ‘encoder.layer.8.attention.output.LayerNorm.bias’, ‘encoder.layer.9.output.LayerNorm.weight’, ‘encoder.layer.10.attention.self.value.weight’, ‘encoder.layer.1.output.dense.bias’, ‘encoder.layer.3.output.dense.bias’, ‘encoder.layer.3.attention.output.dense.weight’, ‘encoder.layer.2.output.dense.bias’, ‘encoder.layer.3.output.LayerNorm.bias’, ‘encoder.layer.0.attention.self.value.weight’, ‘encoder.layer.5.attention.self.key.bias’]
    You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

To load it from the Hub I do:

from transformers import AutoTokenizer, BertModel

checkpoint = "HCKLab/BiBert-MultiTask"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

model1 = MultiTaskModel(checkpoint, tasks).to(device)

What am I missing, and why were the weights not saved when I shared the model to the Hub after training?
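
Here is a sketch of what I would try, under the assumption that the Trainer did upload the full multi-task state dict as pytorch_model.bin: rebuild the architecture, then load that state dict on top of it, instead of letting BertModel.from_pretrained reinterpret the multi-task checkpoint as a plain encoder (which is what the warnings above describe):

import torch
from huggingface_hub import hf_hub_download

weights = hf_hub_download("HCKLab/BiBert-MultiTask", "pytorch_model.bin")
state_dict = torch.load(weights, map_location="cpu")

model1 = MultiTaskModel(checkpoint, tasks)
# strict=False tolerates leftover naming differences (an assumption);
# inspect what did not line up.
missing, unexpected = model1.load_state_dict(state_dict, strict=False)
print(missing, unexpected)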



Maple: Dividing polynomials

I am trying to simplify or approximate the following so that it's not in fraction form: https://i.stack.imgur.com/HH3wR.png

I've tried simplify, factor, expand... they all stay in fraction form. I found this: https://www.maplesoft.com/support/help/Maple/view.aspx?path=PolynomialTools/Approximate/Divide&cid=185, which I thought would be perfect, but it's not working.

Basically, I have

253704*q^(7/2) - 475136*q^4 + 825264*q^(9/2) - 1284096*q^5 + 1938336*q^(11/2)
- 2973696*q^6 + 4437312*q^(13/2) - 6107136*q^7 + 8118024*q^(15/2)
- 11354112*q^8 + 15653352*q^(17/2) - 19802112*q^9 + 24832944*q^(19/2)

as the numerator and

836*q^9+594*q^8-648*q^7-418*q^6+540*q^5-99*q^4-88*q^3+54*q^2-12*q+1

as the denominator. I am trying to get an answer in polynomial form - but instead I get

(1/q^(1/2) + 264*q^(1/2) - 2048*q + 7944*q^(3/2) - 24576*q^2 + 64416*q^(5/2)
- 135168*q^3 + 253704*q^(7/2) - 475136*q^4 + 825264*q^(9/2) - 1284096*q^5
+ 1938336*q^(11/2) - 2973696*q^6 + 4437312*q^(13/2) - 6107136*q^7
+ 8118024*q^(15/2) - 11354112*q^8 + 15653352*q^(17/2) - 19802112*q^9
+ 24832944*q^(19/2)) / (836*q^9 + 594*q^8 - 648*q^7 - 418*q^6 + 540*q^5
- 99*q^4 - 88*q^3 + 54*q^2 - 12*q + 1)

which is just one poly over the other.

Is there a way to do this?

Let me know if you have ideas!
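
For reference, two sketches of directions that may help (hedged: which one fits depends on whether an exact quotient or an approximation is wanted). Since the numerator contains half-integer powers, substituting s = q^(1/2) turns both expressions into ordinary polynomials in s, after which quo gives the polynomial quotient; alternatively, series gives a truncated expansion:

num := 253704*q^(7/2) - 475136*q^4 + 825264*q^(9/2) - 1284096*q^5
     + 1938336*q^(11/2) - 2973696*q^6 + 4437312*q^(13/2) - 6107136*q^7
     + 8118024*q^(15/2) - 11354112*q^8 + 15653352*q^(17/2) - 19802112*q^9
     + 24832944*q^(19/2):
den := 836*q^9 + 594*q^8 - 648*q^7 - 418*q^6 + 540*q^5 - 99*q^4 - 88*q^3
     + 54*q^2 - 12*q + 1:

# Exact route: work in s = q^(1/2) so both are true polynomials in s,
# then take the polynomial quotient (the remainder is discarded):
numS := simplify(subs(q = s^2, num), symbolic):
denS := simplify(subs(q = s^2, den), symbolic):
quo(numS, denS, s);

# Approximate route: a truncated (generalized) series around q = 0:
series(num/den, q = 0, 10);

Note that if the denominator does not divide the numerator exactly, quo silently drops a remainder, which may or may not be acceptable here.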



Is there a way to skip a step in Specflow with NUnit?

I have a test case with a step that applies to every market I run it against except one. I would like to skip this step in that market's scenario.

This is what I am currently doing, but I am wondering if there is a built-in function. I have searched and am not having much luck. Thanks.

[Then(@"Verify Yearly AutoOrder was created from enrollment")]
    public void ThenVerifyYearlyAutoOrderWasCreatedFromEnrollment()
    {
        if (!Market.Equals("in"))
        {
            this.srEnrollPage.VerifyYearlyAutoOrderWasCreatedFromEnrollment(this.dataCarriers.orderNumber, this.dataCarriers.userEmail);
        }
        else
        {
            return; // India does not have yearly autoOrders as of now.
        }
    }
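
For reference, a sketch of a more built-in route, assuming NUnit is the SpecFlow test runner: NUnit's Assert.Ignore aborts the test and marks it ignored at runtime, so the run report distinguishes "skipped" from "passed". Note that inside a step binding it skips the remainder of the scenario, not just this one step:

using NUnit.Framework;

[Then(@"Verify Yearly AutoOrder was created from enrollment")]
public void ThenVerifyYearlyAutoOrderWasCreatedFromEnrollment()
{
    if (Market.Equals("in"))
    {
        // India does not have yearly autoOrders as of now.
        Assert.Ignore("Step not applicable for market 'in'.");
    }

    this.srEnrollPage.VerifyYearlyAutoOrderWasCreatedFromEnrollment(
        this.dataCarriers.orderNumber, this.dataCarriers.userEmail);
}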


bad_function_call thrown and segmentation fault when passing AVX variables to std::function

This problem is found when writing some code related to computer graphics, a simplified version of the code is shown below:

#include <bits/stdc++.h>

#define __AVX__ 1
#define __AVX2__ 1
#pragma GCC target("avx,avx2,popcnt,tune=native")
#include <immintrin.h>

namespace with_avx {
class vec {
   public:
    vec(double x = 0, double y = 0, double z = 0, double t = 0) {
        vec_data = _mm256_set_pd(t, z, y, x);
    }
    __m256d vec_data;
};
}  // namespace with_avx

namespace without_avx {
class vec {
   public:
    vec(double x = 0, double y = 0, double z = 0, double t = 0) {
        vec_data[0] = x, vec_data[1] = y, vec_data[2] = z, vec_data[3] = t;
    }
    double vec_data[4];
};
}  // namespace without_avx

#ifdef USE_AVX
using namespace with_avx;
#else
using namespace without_avx;
#endif

vec same(vec x) { return x; }
std::function<vec(vec)> stdfunc = same;

int main() { 
    vec rand_vec(rand(), rand(), rand());
    vec ret = stdfunc(rand_vec);
    std::cout<<(double)ret.vec_data[0];
}

If I compile the code with the USE_AVX flag defined, like the following:

 g++-12 stdfunction_test.cpp -o ../build/unit_test -D USE_AVX -g

g++ will output some warnings:

In file included from /usr/include/c++/12/functional:59,
                 from /usr/include/x86_64-linux-gnu/c++/12/bits/stdc++.h:71,
                 from stdfunction_test.cpp:2:
/usr/include/c++/12/bits/std_function.h: In member function ‘_Res std::function<_Res(_ArgTypes ...)>::operator()(_ArgTypes ...) const [with _Res = with_avx::vec; _ArgTypes = {with_avx::vec}]’:
/usr/include/c++/12/bits/std_function.h:587:7: note: the ABI for passing parameters with 32-byte alignment has changed in GCC 4.6
  587 |       operator()(_ArgTypes... __args) const
      |       ^~~~~~~~

Then when I run the code, sometimes a segmentation fault occurs, with the following output:

[1]    12710 segmentation fault  ../build/unit_test

Sometimes, bad_function_call is thrown with the following output:

terminate called after throwing an instance of 'std::bad_function_call'
  what():  bad_function_call
[1]    12678 IOT instruction  ../build/unit_test

Both errors happen when this line is executed:

vec ret = stdfunc(rand_vec);

I then used gdb to get a backtrace:

(gdb) bt
#0  0x00007ffff7e35521 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007ffff7e2c6f4 in std::__throw_bad_function_call() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#2  0x000055555555558b in std::function<with_avx::vec (with_avx::vec)>::operator()(with_avx::vec) const (this=0x7fffffffda74,
    __args#0=...) at /usr/include/c++/12/bits/std_function.h:590
#3  0x000055555555528d in main () at stdfunction_test.cpp:39

However, if I don't add the flag, the code runs normally.

I think this is possibly caused by some kind of alignment problem, as the warning said; I just don't know how to solve it.

My environment is listed below, in case it's useful:

  • g++ version: g++-12 (Ubuntu 12-20220319-1ubuntu1) 12.0.1 20220319 (experimental) [master r12-7719-g8ca61ad148f]
  • OS: Ubuntu-22.04 running on WSL2
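
For what it's worth, a sketch of two possible workarounds, under the assumption (suggested by the ABI note) that the pragma makes this translation unit disagree with the already-instantiated std::function machinery about how a 32-byte-aligned vec is passed by value:

// Option 1: drop the #pragma GCC target line and enable AVX for the whole
// translation unit instead, so every instantiation agrees on one ABI:
//   g++-12 -mavx2 stdfunction_test.cpp -o ../build/unit_test -D USE_AVX -g

// Option 2: avoid passing the over-aligned type by value through
// std::function by taking a const reference:
vec same_ref(const vec& x) { return x; }
std::function<vec(const vec&)> stdfunc_ref = same_ref;

Whether either actually removes the crash on your setup would need to be verified; the pragma-after-include pattern is the suspicious part either way, since the standard library headers were already parsed without the AVX target.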


2022-09-27

Plotly: how to change the position of the display

I'm very new to Dash and am just trying to move a component around.

from dash import Dash, html, dcc, Input, Output
import plotly.express as px
import plotly.graph_objects as go
import dash_daq as daq

app = Dash(__name__)  # note: app must be created before assigning app.layout

app.layout = html.Div([
                        html.H1('Billboard'),
                        dcc.Interval(id='input_place'),

                        html.Div([daq.LEDDisplay(
                                        label="Distance",
                                        labelPosition='top',
                                        value=55.99
                        )], style={'width': '25%', 'display': 'inline-block', 'padding': '0 0'})
])

if __name__ == '__main__':
    app.run_server(debug=False)

The output of this is shown in the attached screenshot.

I just want to move it to the center. How do I do that? Thank you in advance!
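
For reference, a minimal sketch of one way to center it, using flexbox on the wrapping Div (the style values are illustrative, not the only option):

html.Div(
    [daq.LEDDisplay(
        label="Distance",
        labelPosition='top',
        value=55.99
    )],
    # centers the child horizontally; add 'alignItems': 'center' plus a
    # height to also center vertically
    style={'display': 'flex', 'justifyContent': 'center'}
)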



Windows Auto Complete Icon List

Does anyone know where to find the list of icons for Windows auto-complete?

When you are typing in Notepad, a browser, or anywhere else in Windows, there are icons you can select as you type.

I would like to know where we can find this list.

For example, if I type "bear", I can select a bear icon from the auto-complete (see screenshot).

Or when I type "pizza", there is a pizza icon I can select.

Anyway, I'd just like to see the list of icons available for the auto-complete feature.

Thank you in advance.

By the way, these icons can also be used in git commit messages and will show up in pull requests in Bitbucket.



Terraform & GCP : Error 403 when attempting to introduce impersonation on project-level

I am quite lost when it comes to applying the principles that enable service account impersonation...

My Terraform project structure has a root module per environment: base for basic infrastructure, dev for the dev environment, and prod for the production environment.

terraform-infra-genesis
 ┣ base
 ┃ ┣ ...
 ┃ ┣ impersonators_x_users.tf  <- user email me@domain.com is granted the iam.serviceAccounts.getAccessToken permission on 'super-admin' here (across the whole organization)
 ┃ ┣ ...
 ┃ ┣ providers_x_access_tokens.tf 
 ┃ ┣ service_accounts_x_roles.tf   <- 'dev-admin' service account declared here
 ┃ ┣ terraform.tfstate
 ┃ ┗ terraform.tfstate.backup
 ┣ dev <- Everything here belongs to the dev environment
 ┃ ┣ backend.tf
 ┃ ┣ data_products.tf  <- Usage of the module 'marketing-hub' here
 ┃ ┣ ...
 ┃ ┣ impersonators_x_providers_x_access_tokens.tf  <- Declaration of the as_dev_admin provider to 'delegate' resource creation (such as folders) to the dev environment "super" administrator. I also declared "as_<project>_dev_admin", which should, in principle, be able to create resources only within its own <project>
 ┣ modules
 ┃ ┣ data-products
 ┃ ┃ ┣ cmi
 ┃ ┃ ┃ ┗ feedback-hub
 ┃ ┃ ┗ ddm
 ┃ ┃ ┃ ┣ analytics-hub
 ┃ ┃ ┃ ┣ marketing-hub
 ┃ ┃ ┃ ┃ ┣ marketing_hub.tf <- Usage of module 'data-project' here to bundle all logical projects together.
 ┃ ┃ ┃ ┗ media-hub
 ┃ ┗ data-project
 ┃ ┃ ┣ data_project.tf <- Module to create a GCP project and its project-dev-admin service account here
 ┣ prod
 ┃ ┣ ...
 ┣ .gitignore
 ┗ README.md

As described in the annotations on this structure, I generally use provider = as_dev_admin within the dev/ root module. I successfully created dev_coupons (a GCP project) using it.

Here is the structure of my dev/impersonators_x_providers_x_access_tokens.tf

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=3.85.0"
    }
  }
}

locals {
  tier_1_scopes = [
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/userinfo.email",
  ]
  tier_2_scopes = [
    "cloud-platform",
    "userinfo-email",
  ]
}
# Dev Admin impersonation
provider "google" {
  alias  = "impersonation"
  scopes = local.tier_1_scopes
}

data "google_service_account_access_token" "dev-admin" {
  provider               = google.impersonation
  target_service_account = data.terraform_remote_state.base.outputs.service-accounts.dev-admin.email
  scopes                 = local.tier_2_scopes
  lifetime               = "1200s"
}

provider "google" {
  alias        = "as_dev_admin"
  access_token = data.google_service_account_access_token.dev-admin.access_token
  region       = var.region
  zone         = var.zone
}


################################################################################
##################### Impersonation of a service account #######################
############################ as_dev_coupons_admin ##############################
################################################################################
# Copy/paste this block in order to introduce the 
# impersonation of any service account

data "google_service_account_access_token" "dev-coupons-admin" {
  provider               = google.impersonation
  target_service_account = module.marketing-hub-products.projects.coupons.admin_service_account.email
  scopes                 = local.tier_2_scopes
  lifetime               = var.lifetime
}

provider "google" {
  alias        = "as_dev_coupons_admin"
  project      = module.marketing-hub-products.projects.coupons.project_info.project_id
  access_token = data.google_service_account_access_token.dev-coupons-admin.access_token
  region       = var.region
  zone         = var.zone
}

resource "google_service_account_iam_member" "dev-coupons-admin-impersonators" {
  provider = google.as_dev_admin  # Global dev environment admin will grant this permission
  for_each = toset([
    for account in var.user_accs_impersonators_info.as_dev_coupons_admin :
    "${account.acc_type}:${account.acc_details.email}"
  ])

  service_account_id = module.marketing-hub-products.projects.coupons.admin_service_account.name
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = each.value
}


################################################################################
################################### End of #####################################
############################ as_dev_coupons_admin ##############################
################################################################################

My project name is dev-coupons.

When I try to declare the additional provider alias as_dev_coupons_admin for the admin of the specific project dev-coupons, I get this error:

│ Error: googleapi: Error 403: The caller does not have permission, forbidden
│
│   with data.google_service_account_access_token.dev-coupons-admin,
│   on impersonators_x_providers_x_access_tokens.tf line 48, in data "google_service_account_access_token" "dev-coupons-admin":
│   48: data "google_service_account_access_token" "dev-coupons-admin" {
│

I don't understand why creating the "google_service_account_access_token" "dev-coupons-admin" returns a 403... At first I thought some parent module's provider was interfering, but no: here we are at the dev root module, with the same credentials that created all the dev environment's resources and the same associated user email, yet this denial of access is returned.

I then enabled logging (export TF_LOG=DEBUG; export TF_LOG_PATH="terraform_log.txt") and found this line:

---[ REQUEST ]---------------------------------------
POST /v1/projects/-/serviceAccounts/dev-coupons-admin@<redacted_project_id>.iam.gserviceaccount.com:generateAccessToken?alt=json&prettyPrint=false HTTP/1.1
Host: iamcredentials.googleapis.com
User-Agent: google-api-go-client/0.5 Terraform/1.2.9 (+https://www.terraform.io) Terraform-Plugin-SDK/2.10.1 terraform-provider-google/dev
Content-Length: 129
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.18.1 gdcl/0.92.0
Accept-Encoding: gzip

{
 "lifetime": "1200s",
 "scope": [
  "https://www.googleapis.com/auth/cloud-platform",
  "https://www.googleapis.com/auth/userinfo.email"
 ]
}

-----------------------------------------------------: timestamp=2022-09-21T21:25:58.442+0200
2022-09-21T21:25:58.534+0200 [INFO]  provider.terraform-provider-google_v4.36.0_x5: 2022/09/21 21:25:58 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 403 Forbidden
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Wed, 21 Sep 2022 19:26:04 GMT
Server: scaffolding on HTTPServer2
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0

{
  "error": {
    "code": 403,
    "message": "The caller does not have permission",
    "errors": [
      {
        "message": "The caller does not have permission",
        "domain": "global",
        "reason": "forbidden"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}

Perhaps it is trying to access "all" projects via the designated service account? Note the - in /v1/projects/-/serviceAccounts/.

If you are able to shed some light on where my understanding is lacking, I would greatly appreciate it.

EDIT : dev-coupons-admin is the owner of dev-coupons
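
For reference, a sketch of the grant that is often missing in this exact situation (an assumption, not a confirmed diagnosis): the google_service_account_iam_member resource above gives other users roles/iam.serviceAccountTokenCreator on dev-coupons-admin, but the identity actually running Terraform (the one behind google.impersonation) also needs that role on dev-coupons-admin itself before generateAccessToken can succeed. The - in /v1/projects/-/serviceAccounts/... is just a wildcard telling the API to infer the project from the service account email, so that part is normal. A hypothetical grant:

resource "google_service_account_iam_member" "me-can-impersonate-dev-coupons-admin" {
  provider           = google.as_dev_admin
  service_account_id = module.marketing-hub-products.projects.coupons.admin_service_account.name
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "user:me@domain.com" # the principal actually running terraform
}

Also worth remembering: IAM changes can take a minute or two to propagate, so a freshly applied grant may still 403 on the immediately following plan.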



SQL: update a table in one database from a table in another database

I am trying to update a table in one database from a table in another database. One table has a number of lat/lon values, and I want to update the other one with those, based on the matching address.

I have tried this:

UPDATE
    WENS_IMPORT.dbo.new_import 
SET
 WENS_IMPORT.dbo.new_import.lat = WENS.dbo.SUBSCRIPTION.lat,
 WENS_IMPORT.dbo.new_import.lon = WENS.dbo.SUBSCRIPTION.lon
FROM
 WENS.dbo.SUBSCRIPTION AS Table_A
 INNER JOIN WENS_IMPORT.dbo.new_import AS Table_B
    ON Table_A.streetAddress = Table_B.Address
WHERE
 Table_A.account_id = '388' AND Table_A.active = '1'

I thought this was the best route, but I keep getting this error:

ERROR: The multi-part identifier "WENS.dbo.SUBSCRIPTION.lat" could not be bound. Error Code: 4104

Is this because it is seeing a number of records that match the address?

Any help would be greatly appreciated! Thanks so much!
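
For reference, a sketch of the usual fix, assuming SQL Server (4104 is the T-SQL "multi-part identifier could not be bound" error): once a table is aliased in the FROM clause, the UPDATE and SET clauses must refer to the alias rather than the full three-part name:

UPDATE Table_B
SET
    Table_B.lat = Table_A.lat,
    Table_B.lon = Table_A.lon
FROM
    WENS.dbo.SUBSCRIPTION AS Table_A
    INNER JOIN WENS_IMPORT.dbo.new_import AS Table_B
        ON Table_A.streetAddress = Table_B.Address
WHERE
    Table_A.account_id = '388' AND Table_A.active = '1';

So the error is about name binding, not about how many rows match the address; if several subscriptions share one address, the update will simply pick one of them per target row.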



Open Refine: Exporting nested XML with templating

I have a question regarding the templating option for XML export in OpenRefine. Is it possible to export data from two columns in a nested XML structure if both columns contain multiple values that need to be split first? Here's an example to illustrate what I mean. My columns look like this:

Column1: https://d-nb.info/gnd/119119110;https://d-nb.info/gnd/118529889
Column2: Grützner, Eduard von;Elisabeth II., Großbritannien, Königin

Each semicolon-separated value in Column1 has a corresponding value in Column2, in the right order, and my desired output would look like this:

<edm:Agent rdf:about="https://d-nb.info/gnd/119119110">
<skos:prefLabel xml:lang="zxx">Grützner, Eduard von</skos:prefLabel>
</edm:Agent>

<edm:Agent rdf:about="https://d-nb.info/gnd/118529889">
<skos:prefLabel xml:lang="zxx">Elisabeth II., Großbritannien, Königin</skos:prefLabel>
</edm:Agent>

I managed to split the values separated by ";" for both columns, but I can't figure out how to nest the split skos:prefLabel elements inside the edm:Agent elements. Is that even possible? If not, I would work with separate columns or another workaround, but I wanted to make sure there isn't a more direct way first.

Thank you! Kristina
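
For reference, a sketch of what might work directly in the Templating exporter's row template, assuming GREL's forEachIndex is available in your OpenRefine version and that both columns always contain the same number of semicolon-separated values:

{{forEachIndex(cells["Column1"].value.split(";"), i, uri,
    '<edm:Agent rdf:about="' + uri + '">' +
    '<skos:prefLabel xml:lang="zxx">' +
    cells["Column2"].value.split(";")[i].trim() +
    '</skos:prefLabel></edm:Agent>').join("\n\n")}}

The idea is to split Column1, and for each index pull the matching value out of the split of Column2, so the nesting is produced per pair rather than per column.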



Can someone tell me how to replace text in a PDF file without creating a new PDF file? Or can I overwrite a file in place like other editors do? [closed]

I was working with iText7 in Java and want to know how this can be done today, in 2022; I did not find many ways to do it. If not this, can someone explain how VS Code or other editors overwrite existing files, i.e. the Save option as opposed to Save As?
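
For reference, a sketch of the usual pattern (hedged: iText7 has no in-place edit of a PDF, and "Save" in editors is typically implemented as write-to-temp-then-rename; ATOMIC_MOVE can fail on some filesystems):

import java.nio.file.*;
import com.itextpdf.kernel.pdf.*;

// inside a method that can throw java.io.IOException:
// write the modified document to a temp file, then swap it over the original
try (PdfDocument pdf = new PdfDocument(new PdfReader("doc.pdf"),
                                       new PdfWriter("doc.pdf.tmp"))) {
    // ... modify pages/content here ...
}
Files.move(Paths.get("doc.pdf.tmp"), Paths.get("doc.pdf"),
           StandardCopyOption.REPLACE_EXISTING,
           StandardCopyOption.ATOMIC_MOVE);

From the user's point of view the original file is "overwritten", even though a new file was produced behind the scenes; plain-text editors can truncate and rewrite a file directly, but PDF cross-reference tables make that impractical for iText.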



2022-09-26

Drop certain rows based on quantity of rows with specific values

I am newer to data science and am working on a project to analyze sports statistics. I have a dataset of hockey statistics for a group of players over multiple seasons. Players have anywhere between 1 and 12 rows representing their season statistics, depending on how many seasons they've played.

Example:

    Player  Season  Pos GP  G   A   P   +/- PIM P/GP    ... PPG PPP SHG SHP OTG GWG S   S%  TOI/GP  FOW%
0   Nathan MacKinnon    2022    1   65  32  56  88  22  42  1.35    ... 7   27  0   0   1   5   299 10.7    21.07   45.4
1   Nathan MacKinnon    2021    1   48  20  45  65  22  37  1.35    ... 8   25  0   0   0   2   206 9.7 20.37   48.5
2   Nathan MacKinnon    2020    1   69  35  58  93  13  12  1.35    ... 12  31  0   0   2   4   318 11.0    21.22   43.1
3   Nathan MacKinnon    2019    1   82  41  58  99  20  34  1.21    ... 12  37  0   0   1   6   365 11.2    22.08   43.7
4   Nathan MacKinnon    2018    1   74  39  58  97  11  55  1.31    ... 12  32  0   1   3   12  284 13.7    19.90   41.9
5   Nathan MacKinnon    2017    1   82  16  37  53  -14 16  0.65    ... 2   14  2   2   2   4   251 6.4 19.95   50.6
6   Nathan MacKinnon    2016    1   72  21  31  52  -4  20  0.72    ... 7   16  0   1   0   6   245 8.6 18.87   48.4
7   Nathan MacKinnon    2015    1   64  14  24  38  -7  34  0.59    ... 3   7   0   0   0   2   192 7.3 17.05   47.0
8   Nathan MacKinnon    2014    1   82  24  39  63  20  26  0.77    ... 8   17  0   0   0   5   241 10.0    17.35   42.9
9   J.T. Compher        2022    2   70  18  15  33  6   25  0.47    ... 4   6   1   1   0   0   102 17.7    16.32   51.4
10  J.T. Compher        2021    2   48  10  8   18  10  19  0.38    ... 1   2   0   0   0   2   47  21.3    14.22   45.9
11  J.T. Compher        2020    2   67  11  20  31  9   18  0.46    ... 1   5   0   3   1   3   106 10.4    16.75   47.7
12  J.T. Compher        2019    2   66  16  16  32  -8  31  0.48    ... 4   9   3   3   0   3   118 13.6    17.48   49.2
13  J.T. Compher        2018    2   69  13  10  23  -29 20  0.33    ... 4   7   2   2   2   3   131 9.9 16.00   45.1
14  J.T. Compher        2017    2   21  3   2   5   0   4   0.24    ... 1   1   0   0   0   1   30  10.0    14.93   47.6
15  Darren Helm         2022    1   68  7   8   15  -5  14  0.22    ... 0   0   1   2   0   1   93  7.5 10.55   44.2
16  Darren Helm         2021    1   47  3   5   8   -3  10  0.17    ... 0   0   0   0   0   0   83  3.6 14.68   66.7
17  Darren Helm         2020    1   68  9   7   16  -6  37  0.24    ... 0   0   1   2   0   0   102 8.8 13.73   53.6
18  Darren Helm         2019    1   61  7   10  17  -11 20  0.28    ... 0   0   1   4   0   0   107 6.5 14.57   44.4
19  Darren Helm         2018    1   75  13  18  31  3   39  0.41    ... 0   0   2   4   0   0   141 9.2 15.57   44.1

Sample of my dataset: https://ift.tt/M4Sz5la

If any player has played more than 6 seasons, I want to drop the row corresponding to Season 2021. This is because COVID drastically shortened the season and it is causing issues as I work with averages.

As you can see from the screenshot, Nathan MacKinnon has played 9 seasons. Across those 9 seasons, except for 2021, he plays in no fewer than 64 games. Due to the shortened season of 2021, he only got 48 games. Removing Season 2021 results in an Average Games Played of 73.75. Keeping Season 2021 in the data, the Average Games Played becomes 70.89.

While not drastic, it compounds into the other metrics as well.

I have been trying this for a little while now, but as I mentioned, I am new to this world and am struggling to figure out how to accomplish this.

I don't want to completely drop ALL 2021 rows across all players, though, as some players only have 1-5 years' worth of data; for those players I need to use as much data as I can, and removing 1 row from a player with only 2 seasons would also negatively skew the averages.

I would really appreciate some assistance from anyone more experienced than me!
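
For reference, a minimal sketch of one way to do this with pandas, assuming the frame is called df and has the Player and Season columns shown:

import pandas as pd

# count how many season-rows each player has
seasons_per_player = df.groupby('Player')['Season'].transform('count')

# drop the 2021 row only for players with more than 6 seasons of data
mask = (df['Season'] == 2021) & (seasons_per_player > 6)
df = df[~mask]

transform('count') broadcasts the per-player row count back onto every row, so the boolean mask can combine "is the 2021 season" with "belongs to a player with more than 6 seasons" in one step.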



Convert a 3D spectrogram to object file format (.OFF)

So far I have produced only deformed 3D shapes, never a correct one; the result should look like the 3D graph, and the HTML export is fine. I have tried close to everything except, apparently, the answer. Should the mesh be built straight from the go.Surface data, or do I need to construct a separate vertex/face structure? I'm not sure whether I triangulated the stacked faces correctly, let alone the np.column_stack with np.ones to get the face lengths and the tuple(map(int, ...)) of each face. Totally confused. By the way, can you convert from the HTML export? I will use this format frequently, so that might come in handy.

Code:

import os
import warnings

import numpy as np
import matplotlib.pyplot as plt
import librosa.display
import plotly.graph_objs as go
from scipy import signal
from scipy.io import wavfile

warnings.filterwarnings('ignore')


def log_specgram(audio, sample_rate, window_size=20, step_size=10, eps=1e-10):
    nperseg = int(round(window_size * sample_rate / 1e3))
    noverlap = int(round(step_size * sample_rate / 1e3))
    freqs, times, spec = signal.spectrogram(audio,
                                            fs=sample_rate,
                                            window='hann',
                                            nperseg=nperseg,
                                            noverlap=noverlap,
                                            detrend=False)
    return freqs, times, np.log(spec.T.astype(np.float32) + eps)


def plot_raw_wave(samples):
    plt.figure(figsize=(10, 3))
    plt.title('Audio wave')
    plt.ylabel('Amplitude')
    plt.plot(samples)


for filename in os.listdir('/Users/tom/Documents/wav/'):
    if 'wav' not in filename:
        continue

    sample_rate, sample = wavfile.read(f'/Users/tom/Documents/wav/{filename}')

    S = librosa.feature.melspectrogram(sample.astype(float), sr=sample_rate, n_mels=128)
    log_S = librosa.power_to_db(S, ref=np.max)

    plt.figure(figsize=(12, 4))
    librosa.display.specshow(log_S, sr=sample_rate, x_axis='time', y_axis='mel')
    plt.title('Mel power spectrogram')
    plt.colorbar(format='%+02.0f dB')
    plt.tight_layout()

    freqs, times, spectrogram = log_specgram(sample, sample_rate)
    data = [go.Surface(z=spectrogram.T)]
    layout = go.Layout(title='Spectrogram 3d')
    fig = go.Figure(data=data, layout=layout)
    fig.write_html(f'/Users/tom/Documents/{filename}.html')

    # This is the part I cannot get right: every attempt at building vertices
    # and faces (spectrogram.T, spectrogram.T*times, S, freqs, log_S.T/times, ...)
    # gives a deformed shape.
    vertices = ...  # ?
    faces = ...     # ?

    with open(f'/Users/tom/Documents/{filename}.off', 'w') as fh:
        fh.write('OFF\n')
        fh.write('{} {} 0\n'.format(len(vertices), len(faces)))

        # prepend the vertex count (3) to every triangular face
        faces_stacked = np.column_stack((np.ones(len(faces)) * 3, faces)).astype(np.int64)

        for v in vertices:
            fh.write("{} {} {}\n".format(*v))

        for f in faces_stacked:
            fh.write("{} {} {} {}\n".format(*f))



memory leak in view struct

I'm at my wit's end here. For some reason the following code causes a memory leak and I can't figure out why. If I comment out the contents of the onEditingChanged callback in TableElement, there is no leak; if I remove the data binding altogether, there is no leak; and if I remove the viewModel and instead just declare mapData as a state in ContentView, there is no leak, but that isn't a viable solution for my actual code. Does anyone know what's causing this memory leak? Thanks in advance.

Here's my view model:

class EditFuelLevelViewModel: ObservableObject {
    
    @Published var mapData: [[Float]] = [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
}

And here's my view:

struct ContentView: View {
    private struct TableElement: View {
        @Binding var data: Float
        @State private var text: String

        init(data: Binding<Float>) {
            self._data = data
            self.text = String(data.wrappedValue)
        }

        var body: some View {
            TextField(
                "Enter Value",
                text: $text,
                onEditingChanged: { editing in
                    if !editing {
                        data = Float(text) ?? 0
                    }
                }
            )
        }
    }

    @StateObject var viewModel: EditFuelLevelViewModel = EditFuelLevelViewModel()
    @State var text = ""

    let timer = Timer.publish(every: 0.5, on: .main, in: .common).autoconnect()

    var body: some View {
        VStack {
            Text(text)
                .onReceive(timer) { test in
                    text = String(test.hashValue)
                }
            TableElement(data: $viewModel.mapData[0][0])
        }
    }
}
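
For what it's worth, a sketch of one workaround, under the assumption (not confirmed) that the leak comes from the onEditingChanged closure capturing the view's binding: commit the value on a focus change instead of inside the TextField callback. This needs iOS 15+ for @FocusState, and whether it actually removes the leak in your setup would have to be checked in Instruments:

import SwiftUI

private struct TableElement: View {
    @Binding var data: Float
    @State private var text: String
    @FocusState private var focused: Bool

    init(data: Binding<Float>) {
        self._data = data
        // idiomatic way to seed @State from init
        self._text = State(initialValue: String(data.wrappedValue))
    }

    var body: some View {
        TextField("Enter Value", text: $text)
            .focused($focused)
            .onChange(of: focused) { isFocused in
                if !isFocused {
                    // commit the edit when the field loses focus
                    data = Float(text) ?? 0
                }
            }
    }
}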


rspec: test array of instances

I'm trying to write RSpec tests for an array of instances. Specifically, I want to verify certain attributes of each instance within the array. Is there a way to do this with RSpec?

For example, suppose I have the following array that I want to verify:

[#<Car id:1, buy_date: "2022-10-10", model: "Ford">, 
 #<Car id:2, buy_date: "2021-01-10", model: "Ferrari">, 
 #<Car id:3, buy_date: "2022-03-12", model: "Toyota">]

As my test, I want to check that the buy_date of each is correct. I tried the following expect statement, but I don't think it's meant for arrays of instances, as the tests failed when I expected them to pass.

expect(cars).to include([
                have_attributes(
                    buy_date: "2022-10-10"
                ),
                have_attributes(
                    buy_date: "2021-01-10"                   
                ),
                have_attributes(
                    buy_date: "2022-03-12"
                )
            ])

I've also tried it with match_array instead of include but the result was the same.

Any ideas how to use rspec to accomplish this?
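
For reference, a sketch of the composed-matcher forms that normally handle this (hedged: if buy_date is a Date/Time rather than a String, the expected values need to be Date/Time objects too, which would also explain match_array failing): include takes the matchers directly, without the wrapping array, while match_array takes an array and requires a one-to-one match:

# loose: each matcher must match some element (extra elements allowed)
expect(cars).to include(
  have_attributes(buy_date: "2022-10-10"),
  have_attributes(buy_date: "2021-01-10"),
  have_attributes(buy_date: "2022-03-12")
)

# strict: exactly these three elements, in any order
expect(cars).to match_array([
  have_attributes(buy_date: "2022-10-10"),
  have_attributes(buy_date: "2021-01-10"),
  have_attributes(buy_date: "2022-03-12")
])

In the original attempt, include([ ... ]) asks whether the array of matchers is itself an element of cars, which is why it never passes.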



Execution failed for task ':app:mapDebugSourceSetPaths'

Execution failed for task ':app:mapDebugSourceSetPaths'.

Error while evaluating property 'extraGeneratedResDir' of task ':app:mapDebugSourceSetPaths'
> Failed to calculate the value of task ':app:mapDebugSourceSetPaths' property 'extraGeneratedResDir'.
> Querying the mapped value of provider(interface java.util.Set) before task ':app:processDebugGoogleServices' has completed is not supported

  • Try:

Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:mapDebugSourceSetPaths'. at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:38) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52) at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:74) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:333) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:320) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:313) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:299) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.lambda$run$0(DefaultPlanExecutor.java:143) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:227) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:218) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:140) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48) Caused by: org.gradle.api.internal.tasks.properties.PropertyEvaluationException: Error while evaluating property 'extraGeneratedResDir' of task ':app:mapDebugSourceSetPaths' at org.gradle.api.internal.tasks.properties.InputParameterUtils.prepareInputParameterValue(InputParameterUtils.java:33) at org.gradle.api.internal.tasks.execution.TaskExecution.lambda$visitRegularInputs$1(TaskExecution.java:315) at org.gradle.internal.execution.fingerprint.impl.DefaultInputFingerprinter$InputCollectingVisitor.visitInputProperty(DefaultInputFingerprinter.java:106) at org.gradle.api.internal.tasks.execution.TaskExecution.visitRegularInputs(TaskExecution.java:315) at 
org.gradle.internal.execution.fingerprint.impl.DefaultInputFingerprinter.fingerprintInputProperties(DefaultInputFingerprinter.java:61) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.captureExecutionStateWithOutputs(CaptureStateBeforeExecutionStep.java:193) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.lambda$captureExecutionState$1(CaptureStateBeforeExecutionStep.java:141) at org.gradle.internal.execution.steps.BuildOperationStep$1.call(BuildOperationStep.java:37) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.internal.execution.steps.BuildOperationStep.operation(BuildOperationStep.java:34) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.captureExecutionState(CaptureStateBeforeExecutionStep.java:130) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.lambda$execute$0(CaptureStateBeforeExecutionStep.java:75) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:75) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:50) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:249) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:86) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:54) at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:32) at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:21) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38) at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:43) at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:31) at org.gradle.internal.execution.steps.AssignWorkspaceStep.lambda$execute$0(AssignWorkspaceStep.java:40) at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:287) at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:40) at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:30) at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:37) at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:27) at 
org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44) at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:33) at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:76) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:144) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:133) at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77) at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52) at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:74) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:333) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:320) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:313) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:299) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.lambda$run$0(DefaultPlanExecutor.java:143) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:227) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:218) at 
org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:140) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48) Caused by: org.gradle.api.internal.provider.AbstractProperty$PropertyQueryException: Failed to calculate the value of task ':app:mapDebugSourceSetPaths' property 'extraGeneratedResDir'. at org.gradle.api.internal.provider.AbstractProperty.finalizeNow(AbstractProperty.java:239) at org.gradle.api.internal.provider.AbstractProperty.beforeRead(AbstractProperty.java:230) at org.gradle.api.internal.provider.AbstractProperty.calculateOwnValue(AbstractProperty.java:126) at org.gradle.api.internal.provider.AbstractMinimalProvider.getOrNull(AbstractMinimalProvider.java:93) at org.gradle.api.internal.provider.ProviderResolutionStrategy$1.resolve(ProviderResolutionStrategy.java:27) at org.gradle.util.internal.DeferredUtil.unpack(DeferredUtil.java:59) at org.gradle.util.internal.DeferredUtil.unpackOrNull(DeferredUtil.java:49) at org.gradle.api.internal.tasks.properties.InputParameterUtils.prepareInputParameterValue(InputParameterUtils.java:39) at org.gradle.api.internal.tasks.properties.InputParameterUtils.prepareInputParameterValue(InputParameterUtils.java:31) ... 68 more Caused by: org.gradle.api.InvalidUserCodeException: Querying the mapped value of provider(interface java.util.Set) before task ':app:processDebugGoogleServices' has completed is not supported at org.gradle.api.internal.provider.TransformBackedProvider.lambda$beforeRead$0(TransformBackedProvider.java:84) at org.gradle.api.internal.provider.BuildableBackedProvider$1.visitProducerTasks(BuildableBackedProvider.java:56) at org.gradle.api.internal.provider.ValueSupplier$ValueProducer.visitContentProducerTasks(ValueSupplier.java:59) at org.gradle.api.internal.provider.TransformBackedProvider.beforeRead(TransformBackedProvider.java:81) at org.gradle.api.internal.provider.TransformBackedProvider.calculateOwnValue(TransformBackedProvider.java:63) at org.gradle.api.internal.provider.AbstractMinimalProvider.calculateValue(AbstractMinimalProvider.java:103) at org.gradle.api.internal.provider.Collectors$ElementsFromCollectionProvider.collectEntries(Collectors.java:216) at org.gradle.api.internal.provider.AbstractCollectionProperty$CollectingSupplier.calculateValue(AbstractCollectionProperty.java:337) at org.gradle.api.internal.provider.AbstractCollectionProperty.finalValue(AbstractCollectionProperty.java:189) at org.gradle.api.internal.provider.AbstractCollectionProperty.finalValue(AbstractCollectionProperty.java:37) at org.gradle.api.internal.provider.AbstractProperty.finalizeNow(AbstractProperty.java:236) ... 76 more